00:00:00.000 Started by upstream project "autotest-nightly" build number 4368 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3731 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.139 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.139 The recommended git tool is: git 00:00:00.140 using credential 00000000-0000-0000-0000-000000000002 00:00:00.141 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.189 Fetching changes from the remote Git repository 00:00:00.190 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.237 Using shallow fetch with depth 1 00:00:00.237 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.237 > git --version # timeout=10 00:00:00.275 > git --version # 'git version 2.39.2' 00:00:00.275 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.301 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.301 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.553 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.566 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.578 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.578 > git config core.sparsecheckout # timeout=10 00:00:08.589 > git read-tree -mu HEAD # timeout=10 00:00:08.604 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.621 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.621 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.704 [Pipeline] Start of Pipeline 00:00:08.717 [Pipeline] library 00:00:08.719 Loading library shm_lib@master 00:00:08.719 Library shm_lib@master is cached. Copying from home. 00:00:08.733 [Pipeline] node 00:00:08.745 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.746 [Pipeline] { 00:00:08.756 [Pipeline] catchError 00:00:08.758 [Pipeline] { 00:00:08.771 [Pipeline] wrap 00:00:08.780 [Pipeline] { 00:00:08.789 [Pipeline] stage 00:00:08.791 [Pipeline] { (Prologue) 00:00:08.809 [Pipeline] echo 00:00:08.810 Node: VM-host-SM38 00:00:08.817 [Pipeline] cleanWs 00:00:08.827 [WS-CLEANUP] Deleting project workspace... 00:00:08.827 [WS-CLEANUP] Deferred wipeout is used... 
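The checkout above pins the build-pool repository to a single revision with a shallow fetch instead of a full clone. A minimal sketch of reproducing the same detached-HEAD checkout locally, assuming access to the same Gerrit remote and credentials (the URL and commit hash are copied verbatim from the log):

    #!/usr/bin/env bash
    # Shallow-fetch only the tip of master, then check out the fetched
    # commit detached, mirroring the Jenkins git steps traced above.
    set -euo pipefail
    git init jbp && cd jbp
    git fetch --tags --force --depth=1 \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool \
        refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507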
00:00:08.834 [WS-CLEANUP] done 00:00:09.016 [Pipeline] setCustomBuildProperty 00:00:09.151 [Pipeline] httpRequest 00:00:09.739 [Pipeline] echo 00:00:09.741 Sorcerer 10.211.164.20 is alive 00:00:09.752 [Pipeline] retry 00:00:09.754 [Pipeline] { 00:00:09.768 [Pipeline] httpRequest 00:00:09.774 HttpMethod: GET 00:00:09.775 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.775 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.806 Response Code: HTTP/1.1 200 OK 00:00:09.807 Success: Status code 200 is in the accepted range: 200,404 00:00:09.808 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:36.990 [Pipeline] } 00:00:37.007 [Pipeline] // retry 00:00:37.015 [Pipeline] sh 00:00:37.301 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:37.318 [Pipeline] httpRequest 00:00:37.665 [Pipeline] echo 00:00:37.667 Sorcerer 10.211.164.20 is alive 00:00:37.677 [Pipeline] retry 00:00:37.679 [Pipeline] { 00:00:37.694 [Pipeline] httpRequest 00:00:37.699 HttpMethod: GET 00:00:37.699 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:37.700 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:37.714 Response Code: HTTP/1.1 200 OK 00:00:37.715 Success: Status code 200 is in the accepted range: 200,404 00:00:37.715 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:11.168 [Pipeline] } 00:01:11.186 [Pipeline] // retry 00:01:11.194 [Pipeline] sh 00:01:11.483 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:14.804 [Pipeline] sh 00:01:15.086 + git -C spdk log --oneline -n5 00:01:15.086 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:15.086 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:15.086 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:15.086 66289a6db build: use VERSION file for storing version 00:01:15.086 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:15.106 [Pipeline] writeFile 00:01:15.120 [Pipeline] sh 00:01:15.408 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:15.421 [Pipeline] sh 00:01:15.707 + cat autorun-spdk.conf 00:01:15.707 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.707 SPDK_TEST_NVME=1 00:01:15.707 SPDK_TEST_FTL=1 00:01:15.707 SPDK_TEST_ISAL=1 00:01:15.707 SPDK_RUN_ASAN=1 00:01:15.707 SPDK_RUN_UBSAN=1 00:01:15.707 SPDK_TEST_XNVME=1 00:01:15.707 SPDK_TEST_NVME_FDP=1 00:01:15.707 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:15.716 RUN_NIGHTLY=1 00:01:15.718 [Pipeline] } 00:01:15.732 [Pipeline] // stage 00:01:15.746 [Pipeline] stage 00:01:15.748 [Pipeline] { (Run VM) 00:01:15.761 [Pipeline] sh 00:01:16.049 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:16.049 + echo 'Start stage prepare_nvme.sh' 00:01:16.049 Start stage prepare_nvme.sh 00:01:16.049 + [[ -n 8 ]] 00:01:16.049 + disk_prefix=ex8 00:01:16.049 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:16.049 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:16.049 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:16.049 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.049 ++ SPDK_TEST_NVME=1 00:01:16.049 ++ SPDK_TEST_FTL=1 00:01:16.049 ++ SPDK_TEST_ISAL=1 00:01:16.049 ++ 
SPDK_RUN_ASAN=1 00:01:16.049 ++ SPDK_RUN_UBSAN=1 00:01:16.049 ++ SPDK_TEST_XNVME=1 00:01:16.049 ++ SPDK_TEST_NVME_FDP=1 00:01:16.049 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.049 ++ RUN_NIGHTLY=1 00:01:16.049 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:16.049 + nvme_files=() 00:01:16.049 + declare -A nvme_files 00:01:16.049 + backend_dir=/var/lib/libvirt/images/backends 00:01:16.049 + nvme_files['nvme.img']=5G 00:01:16.049 + nvme_files['nvme-cmb.img']=5G 00:01:16.049 + nvme_files['nvme-multi0.img']=4G 00:01:16.049 + nvme_files['nvme-multi1.img']=4G 00:01:16.049 + nvme_files['nvme-multi2.img']=4G 00:01:16.049 + nvme_files['nvme-openstack.img']=8G 00:01:16.049 + nvme_files['nvme-zns.img']=5G 00:01:16.049 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:16.049 + (( SPDK_TEST_FTL == 1 )) 00:01:16.049 + nvme_files["nvme-ftl.img"]=6G 00:01:16.049 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:16.049 + nvme_files["nvme-fdp.img"]=1G 00:01:16.049 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:16.049 + for nvme in "${!nvme_files[@]}" 00:01:16.049 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:01:16.049 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.049 + for nvme in "${!nvme_files[@]}" 00:01:16.049 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G 00:01:16.311 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:16.311 + for nvme in "${!nvme_files[@]}" 00:01:16.311 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:01:16.311 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.311 + for nvme in "${!nvme_files[@]}" 00:01:16.311 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:01:16.311 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:16.311 + for nvme in "${!nvme_files[@]}" 00:01:16.311 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:01:16.311 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.311 + for nvme in "${!nvme_files[@]}" 00:01:16.311 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:01:16.311 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.573 + for nvme in "${!nvme_files[@]}" 00:01:16.573 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:01:16.573 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.573 + for nvme in "${!nvme_files[@]}" 00:01:16.573 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G 00:01:16.573 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:16.573 + for nvme in "${!nvme_files[@]}" 00:01:16.573 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:01:16.573 Formatting 
'/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.573 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:01:16.573 + echo 'End stage prepare_nvme.sh' 00:01:16.573 End stage prepare_nvme.sh 00:01:16.587 [Pipeline] sh 00:01:16.873 + DISTRO=fedora39 00:01:16.873 + CPUS=10 00:01:16.873 + RAM=12288 00:01:16.873 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:16.873 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:17.135 00:01:17.135 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:17.135 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:17.135 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:17.135 HELP=0 00:01:17.135 DRY_RUN=0 00:01:17.135 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img, 00:01:17.135 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:17.135 NVME_AUTO_CREATE=0 00:01:17.135 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,, 00:01:17.135 NVME_CMB=,,,, 00:01:17.135 NVME_PMR=,,,, 00:01:17.135 NVME_ZNS=,,,, 00:01:17.135 NVME_MS=true,,,, 00:01:17.135 NVME_FDP=,,,on, 00:01:17.135 SPDK_VAGRANT_DISTRO=fedora39 00:01:17.135 SPDK_VAGRANT_VMCPU=10 00:01:17.135 SPDK_VAGRANT_VMRAM=12288 00:01:17.135 SPDK_VAGRANT_PROVIDER=libvirt 00:01:17.135 SPDK_VAGRANT_HTTP_PROXY= 00:01:17.135 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:17.135 SPDK_OPENSTACK_NETWORK=0 00:01:17.135 VAGRANT_PACKAGE_BOX=0 00:01:17.135 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:17.135 FORCE_DISTRO=true 00:01:17.135 VAGRANT_BOX_VERSION= 00:01:17.135 EXTRA_VAGRANTFILES= 00:01:17.135 NIC_MODEL=e1000 00:01:17.135 00:01:17.135 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:17.135 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:19.686 Bringing machine 'default' up with 'libvirt' provider... 00:01:20.628 ==> default: Creating image (snapshot of base box volume). 00:01:20.628 ==> default: Creating domain with the following settings... 
00:01:20.628 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734351207_b787bef7f45af292b737 00:01:20.628 ==> default: -- Domain type: kvm 00:01:20.628 ==> default: -- Cpus: 10 00:01:20.628 ==> default: -- Feature: acpi 00:01:20.628 ==> default: -- Feature: apic 00:01:20.628 ==> default: -- Feature: pae 00:01:20.628 ==> default: -- Memory: 12288M 00:01:20.628 ==> default: -- Memory Backing: hugepages: 00:01:20.628 ==> default: -- Management MAC: 00:01:20.628 ==> default: -- Loader: 00:01:20.628 ==> default: -- Nvram: 00:01:20.628 ==> default: -- Base box: spdk/fedora39 00:01:20.628 ==> default: -- Storage pool: default 00:01:20.628 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734351207_b787bef7f45af292b737.img (20G) 00:01:20.628 ==> default: -- Volume Cache: default 00:01:20.628 ==> default: -- Kernel: 00:01:20.628 ==> default: -- Initrd: 00:01:20.628 ==> default: -- Graphics Type: vnc 00:01:20.628 ==> default: -- Graphics Port: -1 00:01:20.628 ==> default: -- Graphics IP: 127.0.0.1 00:01:20.628 ==> default: -- Graphics Password: Not defined 00:01:20.628 ==> default: -- Video Type: cirrus 00:01:20.628 ==> default: -- Video VRAM: 9216 00:01:20.628 ==> default: -- Sound Type: 00:01:20.628 ==> default: -- Keymap: en-us 00:01:20.628 ==> default: -- TPM Path: 00:01:20.628 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:20.628 ==> default: -- Command line args: 00:01:20.628 ==> default: -> value=-device, 00:01:20.628 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:20.628 ==> default: -> value=-drive, 00:01:20.628 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:20.628 ==> default: -> value=-device, 00:01:20.628 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:20.628 ==> default: -> value=-device, 00:01:20.628 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:20.628 ==> default: -> value=-drive, 00:01:20.628 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0, 00:01:20.628 ==> default: -> value=-device, 00:01:20.628 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.628 ==> default: -> value=-device, 00:01:20.628 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:20.628 ==> default: -> value=-drive, 00:01:20.628 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:20.628 ==> default: -> value=-device, 00:01:20.628 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.628 ==> default: -> value=-drive, 00:01:20.628 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:20.628 ==> default: -> value=-device, 00:01:20.628 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.628 ==> default: -> value=-drive, 00:01:20.628 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:20.628 ==> default: -> value=-device, 00:01:20.628 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.628 ==> default: -> value=-device, 00:01:20.628 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:20.628 ==> default: -> value=-device, 00:01:20.628 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:20.628 ==> default: -> value=-drive, 00:01:20.628 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:20.628 ==> default: -> value=-device, 00:01:20.628 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.890 ==> default: Creating shared folders metadata... 00:01:20.890 ==> default: Starting domain. 00:01:22.278 ==> default: Waiting for domain to get an IP address... 00:01:40.406 ==> default: Waiting for SSH to become available... 00:01:40.406 ==> default: Configuring and enabling network interfaces... 00:01:42.957 default: SSH address: 192.168.121.139:22 00:01:42.957 default: SSH username: vagrant 00:01:42.957 default: SSH auth method: private key 00:01:45.506 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:53.648 ==> default: Mounting SSHFS shared folder... 00:01:55.564 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:55.564 ==> default: Checking Mount.. 00:01:56.506 ==> default: Folder Successfully Mounted! 00:01:56.506 00:01:56.506 SUCCESS! 00:01:56.506 00:01:56.506 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:56.506 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:56.506 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:56.506 00:01:56.518 [Pipeline] } 00:01:56.534 [Pipeline] // stage 00:01:56.543 [Pipeline] dir 00:01:56.544 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:56.545 [Pipeline] { 00:01:56.558 [Pipeline] catchError 00:01:56.559 [Pipeline] { 00:01:56.571 [Pipeline] sh 00:01:56.857 + vagrant ssh-config --host vagrant 00:01:56.857 + sed -ne '/^Host/,$p' 00:01:56.857 + tee ssh_conf 00:01:59.400 Host vagrant 00:01:59.400 HostName 192.168.121.139 00:01:59.400 User vagrant 00:01:59.400 Port 22 00:01:59.400 UserKnownHostsFile /dev/null 00:01:59.400 StrictHostKeyChecking no 00:01:59.400 PasswordAuthentication no 00:01:59.400 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:59.400 IdentitiesOnly yes 00:01:59.400 LogLevel FATAL 00:01:59.400 ForwardAgent yes 00:01:59.400 ForwardX11 yes 00:01:59.400 00:01:59.416 [Pipeline] withEnv 00:01:59.418 [Pipeline] { 00:01:59.432 [Pipeline] sh 00:01:59.717 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:59.717 source /etc/os-release 00:01:59.717 [[ -e /image.version ]] && img=$(< /image.version) 00:01:59.717 # Minimal, systemd-like check. 
00:01:59.717 if [[ -e /.dockerenv ]]; then 00:01:59.717 # Clear garbage from the node'\''s name: 00:01:59.717 # agt-er_autotest_547-896 -> autotest_547-896 00:01:59.717 # $HOSTNAME is the actual container id 00:01:59.717 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:59.717 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:59.717 # We can assume this is a mount from a host where container is running, 00:01:59.717 # so fetch its hostname to easily identify the target swarm worker. 00:01:59.717 container="$(< /etc/hostname) ($agent)" 00:01:59.717 else 00:01:59.717 # Fallback 00:01:59.717 container=$agent 00:01:59.717 fi 00:01:59.717 fi 00:01:59.717 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:59.717 ' 00:01:59.993 [Pipeline] } 00:02:00.009 [Pipeline] // withEnv 00:02:00.017 [Pipeline] setCustomBuildProperty 00:02:00.031 [Pipeline] stage 00:02:00.033 [Pipeline] { (Tests) 00:02:00.050 [Pipeline] sh 00:02:00.337 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:00.614 [Pipeline] sh 00:02:00.901 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:01.178 [Pipeline] timeout 00:02:01.179 Timeout set to expire in 50 min 00:02:01.180 [Pipeline] { 00:02:01.194 [Pipeline] sh 00:02:01.478 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:02.051 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version 00:02:02.065 [Pipeline] sh 00:02:02.350 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:02.628 [Pipeline] sh 00:02:02.912 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:03.190 [Pipeline] sh 00:02:03.475 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:03.736 ++ readlink -f spdk_repo 00:02:03.736 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:03.736 + [[ -n /home/vagrant/spdk_repo ]] 00:02:03.736 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:03.736 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:03.736 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:03.736 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:03.736 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:03.736 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:03.736 + cd /home/vagrant/spdk_repo 00:02:03.736 + source /etc/os-release 00:02:03.736 ++ NAME='Fedora Linux' 00:02:03.736 ++ VERSION='39 (Cloud Edition)' 00:02:03.736 ++ ID=fedora 00:02:03.736 ++ VERSION_ID=39 00:02:03.736 ++ VERSION_CODENAME= 00:02:03.736 ++ PLATFORM_ID=platform:f39 00:02:03.736 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:03.736 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:03.736 ++ LOGO=fedora-logo-icon 00:02:03.736 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:03.736 ++ HOME_URL=https://fedoraproject.org/ 00:02:03.736 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:03.736 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:03.736 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:03.736 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:03.736 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:03.736 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:03.736 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:03.736 ++ SUPPORT_END=2024-11-12 00:02:03.736 ++ VARIANT='Cloud Edition' 00:02:03.736 ++ VARIANT_ID=cloud 00:02:03.736 + uname -a 00:02:03.736 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:03.736 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:03.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:04.260 Hugepages 00:02:04.260 node hugesize free / total 00:02:04.260 node0 1048576kB 0 / 0 00:02:04.260 node0 2048kB 0 / 0 00:02:04.260 00:02:04.260 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:04.260 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:04.260 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:04.260 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:04.521 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:04.521 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:04.521 + rm -f /tmp/spdk-ld-path 00:02:04.521 + source autorun-spdk.conf 00:02:04.521 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.521 ++ SPDK_TEST_NVME=1 00:02:04.521 ++ SPDK_TEST_FTL=1 00:02:04.521 ++ SPDK_TEST_ISAL=1 00:02:04.521 ++ SPDK_RUN_ASAN=1 00:02:04.521 ++ SPDK_RUN_UBSAN=1 00:02:04.521 ++ SPDK_TEST_XNVME=1 00:02:04.521 ++ SPDK_TEST_NVME_FDP=1 00:02:04.521 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.521 ++ RUN_NIGHTLY=1 00:02:04.521 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:04.521 + [[ -n '' ]] 00:02:04.521 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:04.521 + for M in /var/spdk/build-*-manifest.txt 00:02:04.521 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:04.521 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.521 + for M in /var/spdk/build-*-manifest.txt 00:02:04.521 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:04.521 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.521 + for M in /var/spdk/build-*-manifest.txt 00:02:04.521 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:04.521 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.521 ++ uname 00:02:04.521 + [[ Linux == \L\i\n\u\x ]] 00:02:04.521 + sudo dmesg -T 00:02:04.521 + sudo dmesg --clear 00:02:04.521 + dmesg_pid=5041 00:02:04.521 
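The autorun-spdk.conf file sourced above is a flat shell fragment of KEY=1 flags; later stages gate themselves with bash arithmetic tests such as (( SPDK_TEST_FTL == 1 )). A minimal sketch of that pattern with a stand-in action (the echo is illustrative, not an actual SPDK stage):

    #!/usr/bin/env bash
    set -e
    # Pull the flag assignments (SPDK_TEST_FTL=1, SPDK_RUN_ASAN=1, ...)
    # into the current shell, as autorun.sh does with autorun-spdk.conf.
    source ./autorun-spdk.conf
    # Arithmetic gating: a flag that is unset evaluates to 0 inside (( )).
    if (( SPDK_TEST_FTL == 1 )); then
        echo "FTL testing enabled"    # stand-in for the real test stage
    fi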
+ [[ Fedora Linux == FreeBSD ]] 00:02:04.521 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.521 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.521 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:04.521 + [[ -x /usr/src/fio-static/fio ]] 00:02:04.521 + sudo dmesg -Tw 00:02:04.521 + export FIO_BIN=/usr/src/fio-static/fio 00:02:04.521 + FIO_BIN=/usr/src/fio-static/fio 00:02:04.521 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:04.521 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:04.521 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:04.521 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:04.521 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:04.521 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:04.521 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:04.521 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:04.521 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:04.521 12:14:11 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:04.521 12:14:11 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:04.521 12:14:11 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.521 12:14:11 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:04.521 12:14:11 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:04.521 12:14:11 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:04.521 12:14:11 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:04.521 12:14:11 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:04.521 12:14:11 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:04.521 12:14:11 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:04.521 12:14:11 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.521 12:14:11 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:04.521 12:14:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:04.522 12:14:11 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:04.782 12:14:11 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:04.782 12:14:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:04.782 12:14:11 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:04.782 12:14:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:04.782 12:14:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:04.782 12:14:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:04.782 12:14:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.782 12:14:11 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.782 12:14:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.782 12:14:11 -- paths/export.sh@5 -- $ export PATH 00:02:04.782 12:14:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.782 12:14:11 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:04.782 12:14:11 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:04.782 12:14:11 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734351251.XXXXXX 00:02:04.782 12:14:11 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734351251.rAbs31 00:02:04.782 12:14:11 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:04.782 12:14:11 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:04.782 12:14:11 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:04.782 12:14:11 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:04.782 12:14:11 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:04.782 12:14:11 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:04.783 12:14:11 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:04.783 12:14:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.783 12:14:11 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:04.783 12:14:11 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:04.783 12:14:11 -- pm/common@17 -- $ local monitor 00:02:04.783 12:14:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.783 12:14:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.783 12:14:11 -- pm/common@25 -- $ sleep 1 00:02:04.783 12:14:11 -- pm/common@21 -- $ date +%s 00:02:04.783 12:14:11 -- pm/common@21 -- $ date +%s 00:02:04.783 12:14:11 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734351251 00:02:04.783 12:14:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734351251 00:02:04.783 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734351251_collect-cpu-load.pm.log 00:02:04.783 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734351251_collect-vmstat.pm.log 00:02:05.731 12:14:12 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:05.731 12:14:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:05.731 12:14:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:05.731 12:14:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:05.731 12:14:12 -- spdk/autobuild.sh@16 -- $ date -u 00:02:05.731 Mon Dec 16 12:14:12 PM UTC 2024 00:02:05.731 12:14:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:05.731 v25.01-rc1-2-ge01cb43b8 00:02:05.731 12:14:12 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:05.731 12:14:12 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:05.731 12:14:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:05.731 12:14:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:05.731 12:14:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.731 ************************************ 00:02:05.731 START TEST asan 00:02:05.731 ************************************ 00:02:05.731 using asan 00:02:05.731 12:14:12 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:05.731 00:02:05.731 real 0m0.000s 00:02:05.731 user 0m0.000s 00:02:05.731 sys 0m0.000s 00:02:05.731 ************************************ 00:02:05.731 END TEST asan 00:02:05.731 ************************************ 00:02:05.731 12:14:12 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:05.731 12:14:12 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:05.731 12:14:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:05.731 12:14:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:05.731 12:14:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:05.731 12:14:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:05.731 12:14:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.731 ************************************ 00:02:05.731 START TEST ubsan 00:02:05.731 ************************************ 00:02:05.731 using ubsan 00:02:05.731 12:14:12 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:05.731 00:02:05.731 real 0m0.000s 00:02:05.731 user 0m0.000s 00:02:05.731 sys 0m0.000s 00:02:05.731 ************************************ 00:02:05.731 END TEST ubsan 00:02:05.731 ************************************ 00:02:05.731 12:14:12 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:05.731 12:14:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:05.993 12:14:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:05.993 12:14:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:05.993 12:14:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:05.993 12:14:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:05.993 12:14:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:05.993 12:14:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:05.993 12:14:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
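The START TEST / END TEST banners and per-test timing in the asan and ubsan blocks above come from SPDK's run_test helper. A simplified sketch of that wrapper, assuming only the behaviour visible in the log (the real helper in autotest_common.sh does more bookkeeping):

    #!/usr/bin/env bash
    # Run a named test command bracketed by banners, timing it with the
    # shell's SECONDS counter, in the style of the run_test calls above.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        local start=$SECONDS
        "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name ($(( SECONDS - start ))s, rc=$rc)"
        echo '************************************'
        return $rc
    }

    run_test asan echo 'using asan'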
00:02:05.993 12:14:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:05.993 12:14:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:05.993 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:05.993 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:06.565 Using 'verbs' RDMA provider 00:02:19.739 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:29.746 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:29.746 Creating mk/config.mk...done. 00:02:29.746 Creating mk/cc.flags.mk...done. 00:02:29.746 Type 'make' to build. 00:02:29.746 12:14:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:29.746 12:14:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:29.746 12:14:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:29.746 12:14:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.746 ************************************ 00:02:29.746 START TEST make 00:02:29.746 ************************************ 00:02:29.746 12:14:36 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:29.746 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:29.746 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:29.746 meson setup builddir \ 00:02:29.746 -Dwith-libaio=enabled \ 00:02:29.746 -Dwith-liburing=enabled \ 00:02:29.746 -Dwith-libvfn=disabled \ 00:02:29.746 -Dwith-spdk=disabled \ 00:02:29.746 -Dexamples=false \ 00:02:29.746 -Dtests=false \ 00:02:29.746 -Dtools=false && \ 00:02:29.746 meson compile -C builddir && \ 00:02:29.746 cd -) 00:02:32.294 The Meson build system 00:02:32.294 Version: 1.5.0 00:02:32.294 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:32.294 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:32.294 Build type: native build 00:02:32.294 Project name: xnvme 00:02:32.294 Project version: 0.7.5 00:02:32.294 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:32.294 C linker for the host machine: cc ld.bfd 2.40-14 00:02:32.294 Host machine cpu family: x86_64 00:02:32.294 Host machine cpu: x86_64 00:02:32.294 Message: host_machine.system: linux 00:02:32.294 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:32.294 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:32.294 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:32.294 Run-time dependency threads found: YES 00:02:32.294 Has header "setupapi.h" : NO 00:02:32.294 Has header "linux/blkzoned.h" : YES 00:02:32.294 Has header "linux/blkzoned.h" : YES (cached) 00:02:32.294 Has header "libaio.h" : YES 00:02:32.294 Library aio found: YES 00:02:32.294 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:32.294 Run-time dependency liburing found: YES 2.2 00:02:32.294 Dependency libvfn skipped: feature with-libvfn disabled 00:02:32.294 Found CMake: /usr/bin/cmake (3.27.7) 00:02:32.294 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:32.294 Subproject spdk : skipped: feature with-spdk disabled 00:02:32.294 Run-time dependency appleframeworks found: NO (tried framework) 00:02:32.294 Run-time dependency appleframeworks found: NO (tried framework) 00:02:32.294 Library rt found: 
YES 00:02:32.294 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:32.294 Configuring xnvme_config.h using configuration 00:02:32.294 Configuring xnvme.spec using configuration 00:02:32.294 Run-time dependency bash-completion found: YES 2.11 00:02:32.294 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:32.294 Program cp found: YES (/usr/bin/cp) 00:02:32.294 Build targets in project: 3 00:02:32.294 00:02:32.294 xnvme 0.7.5 00:02:32.294 00:02:32.294 Subprojects 00:02:32.294 spdk : NO Feature 'with-spdk' disabled 00:02:32.294 00:02:32.294 User defined options 00:02:32.294 examples : false 00:02:32.294 tests : false 00:02:32.294 tools : false 00:02:32.294 with-libaio : enabled 00:02:32.294 with-liburing: enabled 00:02:32.294 with-libvfn : disabled 00:02:32.294 with-spdk : disabled 00:02:32.294 00:02:32.294 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:32.605 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:32.605 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:32.605 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:32.891 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:32.891 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:32.891 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:32.891 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:32.891 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:32.891 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:32.891 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:32.891 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:32.891 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:32.891 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:32.891 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:32.891 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:32.891 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:32.891 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:32.891 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:32.891 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:32.891 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:32.891 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:32.891 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:32.891 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:32.891 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:32.891 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:32.891 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:32.891 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:32.891 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:32.891 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:32.891 [29/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:32.891 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:33.152 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:33.152 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:33.152 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:33.152 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:33.152 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:33.152 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:33.152 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:33.152 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:33.152 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:33.152 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:33.152 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:33.152 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:33.152 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:33.152 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:33.152 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:33.152 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:33.152 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:33.152 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:33.152 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:33.152 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:33.152 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:33.152 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:33.152 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:33.152 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:33.152 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:33.152 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:33.152 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:33.152 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:33.413 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:33.413 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:33.413 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:33.413 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:33.413 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:33.413 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:33.413 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:33.413 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:33.413 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:33.413 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:33.413 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:33.413 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:33.413 [71/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:33.413 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:33.672 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:33.930 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:33.930 [75/76] Linking static target lib/libxnvme.a 00:02:33.930 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:33.930 INFO: autodetecting backend as ninja 00:02:33.930 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:33.930 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:40.495 The Meson build system 00:02:40.495 Version: 1.5.0 00:02:40.495 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:40.495 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:40.495 Build type: native build 00:02:40.495 Program cat found: YES (/usr/bin/cat) 00:02:40.495 Project name: DPDK 00:02:40.495 Project version: 24.03.0 00:02:40.495 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:40.495 C linker for the host machine: cc ld.bfd 2.40-14 00:02:40.495 Host machine cpu family: x86_64 00:02:40.495 Host machine cpu: x86_64 00:02:40.495 Message: ## Building in Developer Mode ## 00:02:40.495 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:40.495 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:40.495 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:40.495 Program python3 found: YES (/usr/bin/python3) 00:02:40.495 Program cat found: YES (/usr/bin/cat) 00:02:40.495 Compiler for C supports arguments -march=native: YES 00:02:40.495 Checking for size of "void *" : 8 00:02:40.495 Checking for size of "void *" : 8 (cached) 00:02:40.495 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:40.495 Library m found: YES 00:02:40.495 Library numa found: YES 00:02:40.495 Has header "numaif.h" : YES 00:02:40.495 Library fdt found: NO 00:02:40.495 Library execinfo found: NO 00:02:40.495 Has header "execinfo.h" : YES 00:02:40.495 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:40.495 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:40.495 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:40.495 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:40.495 Run-time dependency openssl found: YES 3.1.1 00:02:40.495 Run-time dependency libpcap found: YES 1.10.4 00:02:40.495 Has header "pcap.h" with dependency libpcap: YES 00:02:40.495 Compiler for C supports arguments -Wcast-qual: YES 00:02:40.495 Compiler for C supports arguments -Wdeprecated: YES 00:02:40.495 Compiler for C supports arguments -Wformat: YES 00:02:40.495 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:40.495 Compiler for C supports arguments -Wformat-security: NO 00:02:40.495 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.495 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:40.496 Compiler for C supports arguments -Wnested-externs: YES 00:02:40.496 Compiler for C supports arguments -Wold-style-definition: YES 00:02:40.496 Compiler for C supports arguments -Wpointer-arith: YES 00:02:40.496 Compiler for C supports arguments -Wsign-compare: YES 00:02:40.496 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:40.496 Compiler for C supports arguments -Wundef: YES 00:02:40.496 Compiler for C supports 
arguments -Wwrite-strings: YES 00:02:40.496 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:40.496 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:40.496 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.496 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:40.496 Program objdump found: YES (/usr/bin/objdump) 00:02:40.496 Compiler for C supports arguments -mavx512f: YES 00:02:40.496 Checking if "AVX512 checking" compiles: YES 00:02:40.496 Fetching value of define "__SSE4_2__" : 1 00:02:40.496 Fetching value of define "__AES__" : 1 00:02:40.496 Fetching value of define "__AVX__" : 1 00:02:40.496 Fetching value of define "__AVX2__" : 1 00:02:40.496 Fetching value of define "__AVX512BW__" : 1 00:02:40.496 Fetching value of define "__AVX512CD__" : 1 00:02:40.496 Fetching value of define "__AVX512DQ__" : 1 00:02:40.496 Fetching value of define "__AVX512F__" : 1 00:02:40.496 Fetching value of define "__AVX512VL__" : 1 00:02:40.496 Fetching value of define "__PCLMUL__" : 1 00:02:40.496 Fetching value of define "__RDRND__" : 1 00:02:40.496 Fetching value of define "__RDSEED__" : 1 00:02:40.496 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:40.496 Fetching value of define "__znver1__" : (undefined) 00:02:40.496 Fetching value of define "__znver2__" : (undefined) 00:02:40.496 Fetching value of define "__znver3__" : (undefined) 00:02:40.496 Fetching value of define "__znver4__" : (undefined) 00:02:40.496 Library asan found: YES 00:02:40.496 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:40.496 Message: lib/log: Defining dependency "log" 00:02:40.496 Message: lib/kvargs: Defining dependency "kvargs" 00:02:40.496 Message: lib/telemetry: Defining dependency "telemetry" 00:02:40.496 Library rt found: YES 00:02:40.496 Checking for function "getentropy" : NO 00:02:40.496 Message: lib/eal: Defining dependency "eal" 00:02:40.496 Message: lib/ring: Defining dependency "ring" 00:02:40.496 Message: lib/rcu: Defining dependency "rcu" 00:02:40.496 Message: lib/mempool: Defining dependency "mempool" 00:02:40.496 Message: lib/mbuf: Defining dependency "mbuf" 00:02:40.496 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:40.496 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.496 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.496 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:40.496 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:40.496 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:40.496 Compiler for C supports arguments -mpclmul: YES 00:02:40.496 Compiler for C supports arguments -maes: YES 00:02:40.496 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:40.496 Compiler for C supports arguments -mavx512bw: YES 00:02:40.496 Compiler for C supports arguments -mavx512dq: YES 00:02:40.496 Compiler for C supports arguments -mavx512vl: YES 00:02:40.496 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:40.496 Compiler for C supports arguments -mavx2: YES 00:02:40.496 Compiler for C supports arguments -mavx: YES 00:02:40.496 Message: lib/net: Defining dependency "net" 00:02:40.496 Message: lib/meter: Defining dependency "meter" 00:02:40.496 Message: lib/ethdev: Defining dependency "ethdev" 00:02:40.496 Message: lib/pci: Defining dependency "pci" 00:02:40.496 Message: lib/cmdline: Defining dependency "cmdline" 00:02:40.496 Message: lib/hash: Defining dependency "hash" 00:02:40.496 Message: lib/timer: Defining 
dependency "timer" 00:02:40.496 Message: lib/compressdev: Defining dependency "compressdev" 00:02:40.496 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:40.496 Message: lib/dmadev: Defining dependency "dmadev" 00:02:40.496 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:40.496 Message: lib/power: Defining dependency "power" 00:02:40.496 Message: lib/reorder: Defining dependency "reorder" 00:02:40.496 Message: lib/security: Defining dependency "security" 00:02:40.496 Has header "linux/userfaultfd.h" : YES 00:02:40.496 Has header "linux/vduse.h" : YES 00:02:40.496 Message: lib/vhost: Defining dependency "vhost" 00:02:40.496 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:40.496 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:40.496 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:40.496 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:40.496 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:40.496 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:40.496 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:40.496 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:40.496 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:40.496 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:40.496 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:40.496 Configuring doxy-api-html.conf using configuration 00:02:40.496 Configuring doxy-api-man.conf using configuration 00:02:40.496 Program mandb found: YES (/usr/bin/mandb) 00:02:40.496 Program sphinx-build found: NO 00:02:40.496 Configuring rte_build_config.h using configuration 00:02:40.496 Message: 00:02:40.496 ================= 00:02:40.496 Applications Enabled 00:02:40.496 ================= 00:02:40.496 00:02:40.496 apps: 00:02:40.496 00:02:40.496 00:02:40.496 Message: 00:02:40.496 ================= 00:02:40.496 Libraries Enabled 00:02:40.496 ================= 00:02:40.496 00:02:40.496 libs: 00:02:40.496 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:40.496 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:40.496 cryptodev, dmadev, power, reorder, security, vhost, 00:02:40.496 00:02:40.496 Message: 00:02:40.496 =============== 00:02:40.496 Drivers Enabled 00:02:40.496 =============== 00:02:40.496 00:02:40.496 common: 00:02:40.496 00:02:40.496 bus: 00:02:40.496 pci, vdev, 00:02:40.496 mempool: 00:02:40.496 ring, 00:02:40.496 dma: 00:02:40.496 00:02:40.496 net: 00:02:40.496 00:02:40.496 crypto: 00:02:40.496 00:02:40.496 compress: 00:02:40.496 00:02:40.496 vdpa: 00:02:40.496 00:02:40.496 00:02:40.496 Message: 00:02:40.496 ================= 00:02:40.496 Content Skipped 00:02:40.496 ================= 00:02:40.496 00:02:40.496 apps: 00:02:40.496 dumpcap: explicitly disabled via build config 00:02:40.496 graph: explicitly disabled via build config 00:02:40.496 pdump: explicitly disabled via build config 00:02:40.496 proc-info: explicitly disabled via build config 00:02:40.496 test-acl: explicitly disabled via build config 00:02:40.496 test-bbdev: explicitly disabled via build config 00:02:40.496 test-cmdline: explicitly disabled via build config 00:02:40.496 test-compress-perf: explicitly disabled via build config 00:02:40.496 test-crypto-perf: explicitly disabled via build config 00:02:40.496 test-dma-perf: explicitly disabled via build config 00:02:40.496 
test-eventdev: explicitly disabled via build config 00:02:40.496 test-fib: explicitly disabled via build config 00:02:40.496 test-flow-perf: explicitly disabled via build config 00:02:40.496 test-gpudev: explicitly disabled via build config 00:02:40.496 test-mldev: explicitly disabled via build config 00:02:40.496 test-pipeline: explicitly disabled via build config 00:02:40.496 test-pmd: explicitly disabled via build config 00:02:40.496 test-regex: explicitly disabled via build config 00:02:40.496 test-sad: explicitly disabled via build config 00:02:40.496 test-security-perf: explicitly disabled via build config 00:02:40.496 00:02:40.496 libs: 00:02:40.496 argparse: explicitly disabled via build config 00:02:40.496 metrics: explicitly disabled via build config 00:02:40.496 acl: explicitly disabled via build config 00:02:40.496 bbdev: explicitly disabled via build config 00:02:40.496 bitratestats: explicitly disabled via build config 00:02:40.496 bpf: explicitly disabled via build config 00:02:40.496 cfgfile: explicitly disabled via build config 00:02:40.496 distributor: explicitly disabled via build config 00:02:40.496 efd: explicitly disabled via build config 00:02:40.496 eventdev: explicitly disabled via build config 00:02:40.496 dispatcher: explicitly disabled via build config 00:02:40.496 gpudev: explicitly disabled via build config 00:02:40.496 gro: explicitly disabled via build config 00:02:40.496 gso: explicitly disabled via build config 00:02:40.496 ip_frag: explicitly disabled via build config 00:02:40.496 jobstats: explicitly disabled via build config 00:02:40.496 latencystats: explicitly disabled via build config 00:02:40.496 lpm: explicitly disabled via build config 00:02:40.496 member: explicitly disabled via build config 00:02:40.496 pcapng: explicitly disabled via build config 00:02:40.496 rawdev: explicitly disabled via build config 00:02:40.496 regexdev: explicitly disabled via build config 00:02:40.496 mldev: explicitly disabled via build config 00:02:40.496 rib: explicitly disabled via build config 00:02:40.496 sched: explicitly disabled via build config 00:02:40.496 stack: explicitly disabled via build config 00:02:40.496 ipsec: explicitly disabled via build config 00:02:40.496 pdcp: explicitly disabled via build config 00:02:40.496 fib: explicitly disabled via build config 00:02:40.496 port: explicitly disabled via build config 00:02:40.496 pdump: explicitly disabled via build config 00:02:40.496 table: explicitly disabled via build config 00:02:40.496 pipeline: explicitly disabled via build config 00:02:40.496 graph: explicitly disabled via build config 00:02:40.496 node: explicitly disabled via build config 00:02:40.496 00:02:40.496 drivers: 00:02:40.496 common/cpt: not in enabled drivers build config 00:02:40.496 common/dpaax: not in enabled drivers build config 00:02:40.496 common/iavf: not in enabled drivers build config 00:02:40.496 common/idpf: not in enabled drivers build config 00:02:40.496 common/ionic: not in enabled drivers build config 00:02:40.496 common/mvep: not in enabled drivers build config 00:02:40.496 common/octeontx: not in enabled drivers build config 00:02:40.496 bus/auxiliary: not in enabled drivers build config 00:02:40.497 bus/cdx: not in enabled drivers build config 00:02:40.497 bus/dpaa: not in enabled drivers build config 00:02:40.497 bus/fslmc: not in enabled drivers build config 00:02:40.497 bus/ifpga: not in enabled drivers build config 00:02:40.497 bus/platform: not in enabled drivers build config 00:02:40.497 bus/uacce: not in enabled 
drivers build config 00:02:40.497 bus/vmbus: not in enabled drivers build config 00:02:40.497 common/cnxk: not in enabled drivers build config 00:02:40.497 common/mlx5: not in enabled drivers build config 00:02:40.497 common/nfp: not in enabled drivers build config 00:02:40.497 common/nitrox: not in enabled drivers build config 00:02:40.497 common/qat: not in enabled drivers build config 00:02:40.497 common/sfc_efx: not in enabled drivers build config 00:02:40.497 mempool/bucket: not in enabled drivers build config 00:02:40.497 mempool/cnxk: not in enabled drivers build config 00:02:40.497 mempool/dpaa: not in enabled drivers build config 00:02:40.497 mempool/dpaa2: not in enabled drivers build config 00:02:40.497 mempool/octeontx: not in enabled drivers build config 00:02:40.497 mempool/stack: not in enabled drivers build config 00:02:40.497 dma/cnxk: not in enabled drivers build config 00:02:40.497 dma/dpaa: not in enabled drivers build config 00:02:40.497 dma/dpaa2: not in enabled drivers build config 00:02:40.497 dma/hisilicon: not in enabled drivers build config 00:02:40.497 dma/idxd: not in enabled drivers build config 00:02:40.497 dma/ioat: not in enabled drivers build config 00:02:40.497 dma/skeleton: not in enabled drivers build config 00:02:40.497 net/af_packet: not in enabled drivers build config 00:02:40.497 net/af_xdp: not in enabled drivers build config 00:02:40.497 net/ark: not in enabled drivers build config 00:02:40.497 net/atlantic: not in enabled drivers build config 00:02:40.497 net/avp: not in enabled drivers build config 00:02:40.497 net/axgbe: not in enabled drivers build config 00:02:40.497 net/bnx2x: not in enabled drivers build config 00:02:40.497 net/bnxt: not in enabled drivers build config 00:02:40.497 net/bonding: not in enabled drivers build config 00:02:40.497 net/cnxk: not in enabled drivers build config 00:02:40.497 net/cpfl: not in enabled drivers build config 00:02:40.497 net/cxgbe: not in enabled drivers build config 00:02:40.497 net/dpaa: not in enabled drivers build config 00:02:40.497 net/dpaa2: not in enabled drivers build config 00:02:40.497 net/e1000: not in enabled drivers build config 00:02:40.497 net/ena: not in enabled drivers build config 00:02:40.497 net/enetc: not in enabled drivers build config 00:02:40.497 net/enetfec: not in enabled drivers build config 00:02:40.497 net/enic: not in enabled drivers build config 00:02:40.497 net/failsafe: not in enabled drivers build config 00:02:40.497 net/fm10k: not in enabled drivers build config 00:02:40.497 net/gve: not in enabled drivers build config 00:02:40.497 net/hinic: not in enabled drivers build config 00:02:40.497 net/hns3: not in enabled drivers build config 00:02:40.497 net/i40e: not in enabled drivers build config 00:02:40.497 net/iavf: not in enabled drivers build config 00:02:40.497 net/ice: not in enabled drivers build config 00:02:40.497 net/idpf: not in enabled drivers build config 00:02:40.497 net/igc: not in enabled drivers build config 00:02:40.497 net/ionic: not in enabled drivers build config 00:02:40.497 net/ipn3ke: not in enabled drivers build config 00:02:40.497 net/ixgbe: not in enabled drivers build config 00:02:40.497 net/mana: not in enabled drivers build config 00:02:40.497 net/memif: not in enabled drivers build config 00:02:40.497 net/mlx4: not in enabled drivers build config 00:02:40.497 net/mlx5: not in enabled drivers build config 00:02:40.497 net/mvneta: not in enabled drivers build config 00:02:40.497 net/mvpp2: not in enabled drivers build config 00:02:40.497 
net/netvsc: not in enabled drivers build config 00:02:40.497 net/nfb: not in enabled drivers build config 00:02:40.497 net/nfp: not in enabled drivers build config 00:02:40.497 net/ngbe: not in enabled drivers build config 00:02:40.497 net/null: not in enabled drivers build config 00:02:40.497 net/octeontx: not in enabled drivers build config 00:02:40.497 net/octeon_ep: not in enabled drivers build config 00:02:40.497 net/pcap: not in enabled drivers build config 00:02:40.497 net/pfe: not in enabled drivers build config 00:02:40.497 net/qede: not in enabled drivers build config 00:02:40.497 net/ring: not in enabled drivers build config 00:02:40.497 net/sfc: not in enabled drivers build config 00:02:40.497 net/softnic: not in enabled drivers build config 00:02:40.497 net/tap: not in enabled drivers build config 00:02:40.497 net/thunderx: not in enabled drivers build config 00:02:40.497 net/txgbe: not in enabled drivers build config 00:02:40.497 net/vdev_netvsc: not in enabled drivers build config 00:02:40.497 net/vhost: not in enabled drivers build config 00:02:40.497 net/virtio: not in enabled drivers build config 00:02:40.497 net/vmxnet3: not in enabled drivers build config 00:02:40.497 raw/*: missing internal dependency, "rawdev" 00:02:40.497 crypto/armv8: not in enabled drivers build config 00:02:40.497 crypto/bcmfs: not in enabled drivers build config 00:02:40.497 crypto/caam_jr: not in enabled drivers build config 00:02:40.497 crypto/ccp: not in enabled drivers build config 00:02:40.497 crypto/cnxk: not in enabled drivers build config 00:02:40.497 crypto/dpaa_sec: not in enabled drivers build config 00:02:40.497 crypto/dpaa2_sec: not in enabled drivers build config 00:02:40.497 crypto/ipsec_mb: not in enabled drivers build config 00:02:40.497 crypto/mlx5: not in enabled drivers build config 00:02:40.497 crypto/mvsam: not in enabled drivers build config 00:02:40.497 crypto/nitrox: not in enabled drivers build config 00:02:40.497 crypto/null: not in enabled drivers build config 00:02:40.497 crypto/octeontx: not in enabled drivers build config 00:02:40.497 crypto/openssl: not in enabled drivers build config 00:02:40.497 crypto/scheduler: not in enabled drivers build config 00:02:40.497 crypto/uadk: not in enabled drivers build config 00:02:40.497 crypto/virtio: not in enabled drivers build config 00:02:40.497 compress/isal: not in enabled drivers build config 00:02:40.497 compress/mlx5: not in enabled drivers build config 00:02:40.497 compress/nitrox: not in enabled drivers build config 00:02:40.497 compress/octeontx: not in enabled drivers build config 00:02:40.497 compress/zlib: not in enabled drivers build config 00:02:40.497 regex/*: missing internal dependency, "regexdev" 00:02:40.497 ml/*: missing internal dependency, "mldev" 00:02:40.497 vdpa/ifc: not in enabled drivers build config 00:02:40.497 vdpa/mlx5: not in enabled drivers build config 00:02:40.497 vdpa/nfp: not in enabled drivers build config 00:02:40.497 vdpa/sfc: not in enabled drivers build config 00:02:40.497 event/*: missing internal dependency, "eventdev" 00:02:40.497 baseband/*: missing internal dependency, "bbdev" 00:02:40.497 gpu/*: missing internal dependency, "gpudev" 00:02:40.497 00:02:40.497 00:02:40.497 Build targets in project: 84 00:02:40.497 00:02:40.497 DPDK 24.03.0 00:02:40.497 00:02:40.497 User defined options 00:02:40.497 buildtype : debug 00:02:40.497 default_library : shared 00:02:40.497 libdir : lib 00:02:40.497 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:40.497 b_sanitize : address 
00:02:40.497 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:40.497 c_link_args : 00:02:40.497 cpu_instruction_set: native 00:02:40.497 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:40.497 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:40.497 enable_docs : false 00:02:40.497 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:40.497 enable_kmods : false 00:02:40.497 max_lcores : 128 00:02:40.497 tests : false 00:02:40.497 00:02:40.497 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:40.497 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:40.497 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:40.497 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:40.497 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:40.497 [4/267] Linking static target lib/librte_kvargs.a 00:02:40.497 [5/267] Linking static target lib/librte_log.a 00:02:40.497 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:40.776 [7/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.776 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:40.776 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:40.776 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:40.776 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:40.776 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:40.776 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:40.776 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:40.776 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:41.035 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:41.035 [17/267] Linking static target lib/librte_telemetry.a 00:02:41.035 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:41.035 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.035 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:41.035 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:41.035 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:41.035 [23/267] Linking target lib/librte_log.so.24.1 00:02:41.035 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:41.035 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:41.292 [26/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:41.292 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 
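For reference, the "User defined options" summary above maps onto a meson invocation along these lines. This is a reconstruction from the logged option values only, not the literal command issued by SPDK's dpdkbuild wrapper, and the -D spellings assume DPDK's standard meson_options.txt:

    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
        --buildtype=debug -Ddefault_library=shared -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native -Dmax_lcores=128 \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
    ninja -C build-tmp -j 10

The numbered [n/267] ninja steps around this point compile and link exactly the libraries listed under "Libraries Enabled" plus the three enabled driver groups (bus_pci, bus_vdev, mempool_ring).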
00:02:41.292 [28/267] Linking target lib/librte_kvargs.so.24.1 00:02:41.292 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:41.292 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:41.550 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:41.550 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:41.550 [33/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:41.550 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:41.550 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:41.550 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:41.550 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:41.550 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:41.550 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:41.550 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:41.807 [41/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.808 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:41.808 [43/267] Linking target lib/librte_telemetry.so.24.1 00:02:41.808 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:41.808 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:41.808 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:41.808 [47/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:42.066 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:42.066 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:42.066 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:42.066 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:42.066 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:42.066 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:42.066 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:42.066 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:42.324 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:42.324 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:42.324 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:42.324 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:42.324 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:42.324 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:42.324 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:42.324 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:42.324 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:42.582 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.582 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:42.582 [67/267] Compiling 
C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.582 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.841 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:42.841 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.841 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:42.841 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:42.841 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:42.841 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:42.841 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.841 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:42.841 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:42.841 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.099 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:43.099 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.099 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:43.099 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:43.099 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:43.357 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.357 [85/267] Linking static target lib/librte_eal.a 00:02:43.357 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:43.357 [87/267] Linking static target lib/librte_ring.a 00:02:43.357 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:43.357 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:43.615 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:43.615 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:43.615 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:43.615 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:43.615 [94/267] Linking static target lib/librte_mempool.a 00:02:43.615 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:43.615 [96/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.615 [97/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:43.615 [98/267] Linking static target lib/librte_rcu.a 00:02:43.874 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:43.874 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:43.874 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:43.874 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:43.874 [103/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.132 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:44.132 [105/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.132 [106/267] Linking static target lib/librte_net.a 00:02:44.132 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:44.132 [108/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.132 [109/267] Linking static target lib/librte_meter.a 00:02:44.132 
[110/267] Linking static target lib/librte_mbuf.a 00:02:44.132 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:44.132 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:44.132 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:44.390 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.390 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.390 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.390 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.648 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.648 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.648 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.906 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.906 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.906 [123/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.906 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:44.906 [125/267] Linking static target lib/librte_pci.a 00:02:44.906 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:45.164 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:45.164 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:45.164 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:45.164 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:45.164 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:45.164 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:45.164 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:45.164 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:45.164 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:45.164 [136/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.164 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:45.164 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:45.422 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.422 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.422 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:45.422 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.422 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:45.422 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.422 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.422 [146/267] Linking static target lib/librte_cmdline.a 00:02:45.680 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.680 [148/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:45.680 [149/267] Linking static 
target lib/librte_timer.a 00:02:45.680 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:45.680 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:45.680 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:45.938 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.938 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:45.938 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.938 [156/267] Linking static target lib/librte_ethdev.a 00:02:45.938 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:46.195 [158/267] Linking static target lib/librte_compressdev.a 00:02:46.195 [159/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:46.195 [160/267] Linking static target lib/librte_hash.a 00:02:46.195 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:46.195 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.196 [163/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.196 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:46.454 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:46.454 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:46.454 [167/267] Linking static target lib/librte_dmadev.a 00:02:46.454 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:46.454 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:46.454 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:46.454 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:46.777 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.777 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:46.777 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.777 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:46.777 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:47.052 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:47.052 [178/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.052 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:47.052 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:47.052 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.052 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:47.052 [183/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:47.052 [184/267] Linking static target lib/librte_power.a 00:02:47.052 [185/267] Linking static target lib/librte_cryptodev.a 00:02:47.310 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:47.310 [187/267] Linking static target lib/librte_reorder.a 00:02:47.310 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:47.310 [189/267] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:47.310 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:47.310 [191/267] Linking static target lib/librte_security.a 00:02:47.568 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:47.568 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.826 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:47.826 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.826 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.084 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:48.084 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:48.084 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:48.084 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:48.084 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:48.342 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:48.342 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:48.342 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:48.342 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:48.342 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:48.342 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:48.600 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:48.600 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:48.600 [210/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:48.600 [211/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:48.600 [212/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:48.600 [213/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:48.600 [214/267] Linking static target drivers/librte_bus_pci.a 00:02:48.600 [215/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.600 [216/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.600 [217/267] Linking static target drivers/librte_bus_vdev.a 00:02:48.859 [218/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:48.859 [219/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:48.859 [220/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.859 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:48.859 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:48.859 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:48.859 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:48.859 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.118 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:49.376 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.312 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.312 [229/267] Linking target lib/librte_eal.so.24.1 00:02:50.570 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:50.570 [231/267] Linking target lib/librte_timer.so.24.1 00:02:50.570 [232/267] Linking target lib/librte_meter.so.24.1 00:02:50.570 [233/267] Linking target lib/librte_pci.so.24.1 00:02:50.570 [234/267] Linking target lib/librte_ring.so.24.1 00:02:50.570 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:50.570 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:50.570 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:50.570 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:50.570 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:50.570 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:50.570 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:50.570 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:50.570 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:50.570 [244/267] Linking target lib/librte_rcu.so.24.1 00:02:50.829 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:50.829 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:50.829 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:50.829 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:50.829 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:50.829 [250/267] Linking target lib/librte_net.so.24.1 00:02:50.829 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:50.829 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:50.829 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:51.087 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:51.087 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:51.087 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:51.087 [257/267] Linking target lib/librte_hash.so.24.1 00:02:51.087 [258/267] Linking target lib/librte_security.so.24.1 00:02:51.087 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:51.346 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.346 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:51.346 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:51.604 [263/267] Linking target lib/librte_power.so.24.1 00:02:52.171 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:52.171 [265/267] Linking static target lib/librte_vhost.a 00:02:53.105 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.105 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:53.105 INFO: autodetecting backend as ninja 00:02:53.105 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:08.043 CC lib/ut_mock/mock.o 
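Once the DPDK subproject completes at [267/267] and meson reports its ninja backend command, the log switches to SPDK's own Makefile-driven build: CC/CXX lines compile objects, LIB produces static archives, and SO/SYMLINK produce the shared variants. Outside CI, the equivalent steps would look roughly like the following; the exact flag set is derived from the SPDK_TEST_* variables in the job configuration, so these flags are indicative only (the xnvme bdev module compiled later in this log suggests --with-xnvme was enabled, for example):

    ./configure --enable-debug --enable-asan --enable-ubsan --with-shared --with-xnvme
    make -j10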
00:03:08.043 CC lib/ut/ut.o 00:03:08.043 CC lib/log/log.o 00:03:08.043 CC lib/log/log_flags.o 00:03:08.043 CC lib/log/log_deprecated.o 00:03:08.043 LIB libspdk_ut_mock.a 00:03:08.043 LIB libspdk_log.a 00:03:08.043 LIB libspdk_ut.a 00:03:08.043 SO libspdk_ut_mock.so.6.0 00:03:08.043 SO libspdk_log.so.7.1 00:03:08.043 SO libspdk_ut.so.2.0 00:03:08.043 SYMLINK libspdk_ut_mock.so 00:03:08.043 SYMLINK libspdk_ut.so 00:03:08.043 SYMLINK libspdk_log.so 00:03:08.043 CC lib/util/base64.o 00:03:08.043 CC lib/util/bit_array.o 00:03:08.043 CC lib/util/cpuset.o 00:03:08.043 CC lib/util/crc32c.o 00:03:08.043 CC lib/util/crc16.o 00:03:08.043 CC lib/util/crc32.o 00:03:08.043 CC lib/ioat/ioat.o 00:03:08.043 CXX lib/trace_parser/trace.o 00:03:08.043 CC lib/dma/dma.o 00:03:08.043 CC lib/vfio_user/host/vfio_user_pci.o 00:03:08.043 CC lib/util/crc32_ieee.o 00:03:08.043 CC lib/util/crc64.o 00:03:08.043 CC lib/util/dif.o 00:03:08.043 CC lib/util/fd.o 00:03:08.043 CC lib/util/fd_group.o 00:03:08.043 CC lib/util/file.o 00:03:08.043 LIB libspdk_dma.a 00:03:08.043 LIB libspdk_ioat.a 00:03:08.043 SO libspdk_dma.so.5.0 00:03:08.043 CC lib/vfio_user/host/vfio_user.o 00:03:08.043 SO libspdk_ioat.so.7.0 00:03:08.043 CC lib/util/hexlify.o 00:03:08.043 SYMLINK libspdk_dma.so 00:03:08.043 CC lib/util/iov.o 00:03:08.043 CC lib/util/math.o 00:03:08.043 SYMLINK libspdk_ioat.so 00:03:08.043 CC lib/util/net.o 00:03:08.043 CC lib/util/pipe.o 00:03:08.043 CC lib/util/strerror_tls.o 00:03:08.043 CC lib/util/string.o 00:03:08.043 CC lib/util/uuid.o 00:03:08.043 LIB libspdk_vfio_user.a 00:03:08.043 CC lib/util/xor.o 00:03:08.043 SO libspdk_vfio_user.so.5.0 00:03:08.043 CC lib/util/zipf.o 00:03:08.043 CC lib/util/md5.o 00:03:08.043 SYMLINK libspdk_vfio_user.so 00:03:08.043 LIB libspdk_util.a 00:03:08.043 SO libspdk_util.so.10.1 00:03:08.043 SYMLINK libspdk_util.so 00:03:08.043 LIB libspdk_trace_parser.a 00:03:08.043 SO libspdk_trace_parser.so.6.0 00:03:08.043 SYMLINK libspdk_trace_parser.so 00:03:08.043 CC lib/json/json_parse.o 00:03:08.043 CC lib/json/json_write.o 00:03:08.043 CC lib/json/json_util.o 00:03:08.043 CC lib/idxd/idxd_user.o 00:03:08.043 CC lib/idxd/idxd.o 00:03:08.043 CC lib/idxd/idxd_kernel.o 00:03:08.043 CC lib/conf/conf.o 00:03:08.043 CC lib/env_dpdk/env.o 00:03:08.043 CC lib/vmd/vmd.o 00:03:08.043 CC lib/rdma_utils/rdma_utils.o 00:03:08.043 CC lib/vmd/led.o 00:03:08.043 LIB libspdk_conf.a 00:03:08.043 CC lib/env_dpdk/memory.o 00:03:08.043 CC lib/env_dpdk/pci.o 00:03:08.043 CC lib/env_dpdk/init.o 00:03:08.043 LIB libspdk_json.a 00:03:08.043 CC lib/env_dpdk/threads.o 00:03:08.043 SO libspdk_conf.so.6.0 00:03:08.043 SO libspdk_json.so.6.0 00:03:08.043 LIB libspdk_rdma_utils.a 00:03:08.043 SYMLINK libspdk_conf.so 00:03:08.043 SO libspdk_rdma_utils.so.1.0 00:03:08.043 SYMLINK libspdk_json.so 00:03:08.043 CC lib/env_dpdk/pci_ioat.o 00:03:08.043 SYMLINK libspdk_rdma_utils.so 00:03:08.043 CC lib/env_dpdk/pci_virtio.o 00:03:08.043 CC lib/env_dpdk/pci_vmd.o 00:03:08.043 CC lib/jsonrpc/jsonrpc_server.o 00:03:08.043 CC lib/rdma_provider/common.o 00:03:08.043 CC lib/env_dpdk/pci_idxd.o 00:03:08.043 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:08.043 CC lib/jsonrpc/jsonrpc_client.o 00:03:08.043 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:08.043 LIB libspdk_vmd.a 00:03:08.043 SO libspdk_vmd.so.6.0 00:03:08.043 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:08.043 SYMLINK libspdk_vmd.so 00:03:08.043 CC lib/env_dpdk/pci_event.o 00:03:08.043 LIB libspdk_idxd.a 00:03:08.043 CC lib/env_dpdk/sigbus_handler.o 00:03:08.043 CC 
lib/env_dpdk/pci_dpdk.o 00:03:08.043 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:08.043 SO libspdk_idxd.so.12.1 00:03:08.043 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:08.043 LIB libspdk_jsonrpc.a 00:03:08.043 SYMLINK libspdk_idxd.so 00:03:08.043 SO libspdk_jsonrpc.so.6.0 00:03:08.043 LIB libspdk_rdma_provider.a 00:03:08.043 SO libspdk_rdma_provider.so.7.0 00:03:08.043 SYMLINK libspdk_jsonrpc.so 00:03:08.043 SYMLINK libspdk_rdma_provider.so 00:03:08.043 CC lib/rpc/rpc.o 00:03:08.043 LIB libspdk_rpc.a 00:03:08.301 SO libspdk_rpc.so.6.0 00:03:08.301 SYMLINK libspdk_rpc.so 00:03:08.301 LIB libspdk_env_dpdk.a 00:03:08.301 SO libspdk_env_dpdk.so.15.1 00:03:08.301 CC lib/keyring/keyring.o 00:03:08.301 CC lib/keyring/keyring_rpc.o 00:03:08.301 CC lib/trace/trace.o 00:03:08.301 CC lib/trace/trace_flags.o 00:03:08.301 CC lib/trace/trace_rpc.o 00:03:08.301 CC lib/notify/notify.o 00:03:08.301 CC lib/notify/notify_rpc.o 00:03:08.559 SYMLINK libspdk_env_dpdk.so 00:03:08.559 LIB libspdk_notify.a 00:03:08.559 LIB libspdk_keyring.a 00:03:08.559 SO libspdk_notify.so.6.0 00:03:08.559 SO libspdk_keyring.so.2.0 00:03:08.559 SYMLINK libspdk_notify.so 00:03:08.559 LIB libspdk_trace.a 00:03:08.559 SYMLINK libspdk_keyring.so 00:03:08.817 SO libspdk_trace.so.11.0 00:03:08.817 SYMLINK libspdk_trace.so 00:03:09.075 CC lib/sock/sock_rpc.o 00:03:09.075 CC lib/sock/sock.o 00:03:09.075 CC lib/thread/thread.o 00:03:09.075 CC lib/thread/iobuf.o 00:03:09.333 LIB libspdk_sock.a 00:03:09.333 SO libspdk_sock.so.10.0 00:03:09.591 SYMLINK libspdk_sock.so 00:03:09.849 CC lib/nvme/nvme_ctrlr.o 00:03:09.849 CC lib/nvme/nvme_ns.o 00:03:09.849 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:09.849 CC lib/nvme/nvme_pcie_common.o 00:03:09.849 CC lib/nvme/nvme_fabric.o 00:03:09.849 CC lib/nvme/nvme_ns_cmd.o 00:03:09.849 CC lib/nvme/nvme_pcie.o 00:03:09.849 CC lib/nvme/nvme.o 00:03:09.849 CC lib/nvme/nvme_qpair.o 00:03:10.107 CC lib/nvme/nvme_quirks.o 00:03:10.365 CC lib/nvme/nvme_transport.o 00:03:10.365 CC lib/nvme/nvme_discovery.o 00:03:10.365 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:10.365 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:10.365 LIB libspdk_thread.a 00:03:10.365 CC lib/nvme/nvme_tcp.o 00:03:10.623 SO libspdk_thread.so.11.0 00:03:10.623 CC lib/nvme/nvme_opal.o 00:03:10.623 SYMLINK libspdk_thread.so 00:03:10.623 CC lib/nvme/nvme_io_msg.o 00:03:10.623 CC lib/nvme/nvme_poll_group.o 00:03:10.623 CC lib/accel/accel.o 00:03:10.880 CC lib/blob/blobstore.o 00:03:10.880 CC lib/blob/request.o 00:03:10.880 CC lib/blob/zeroes.o 00:03:10.880 CC lib/blob/blob_bs_dev.o 00:03:11.138 CC lib/accel/accel_rpc.o 00:03:11.138 CC lib/nvme/nvme_zns.o 00:03:11.138 CC lib/nvme/nvme_stubs.o 00:03:11.138 CC lib/nvme/nvme_auth.o 00:03:11.138 CC lib/nvme/nvme_cuse.o 00:03:11.396 CC lib/init/json_config.o 00:03:11.396 CC lib/virtio/virtio.o 00:03:11.396 CC lib/nvme/nvme_rdma.o 00:03:11.396 CC lib/accel/accel_sw.o 00:03:11.653 CC lib/init/subsystem.o 00:03:11.653 CC lib/init/subsystem_rpc.o 00:03:11.653 CC lib/init/rpc.o 00:03:11.653 LIB libspdk_accel.a 00:03:11.653 CC lib/virtio/virtio_vhost_user.o 00:03:11.653 SO libspdk_accel.so.16.0 00:03:11.653 LIB libspdk_init.a 00:03:11.911 SO libspdk_init.so.6.0 00:03:11.911 SYMLINK libspdk_accel.so 00:03:11.911 CC lib/virtio/virtio_vfio_user.o 00:03:11.911 SYMLINK libspdk_init.so 00:03:11.911 CC lib/virtio/virtio_pci.o 00:03:11.911 CC lib/fsdev/fsdev.o 00:03:11.911 CC lib/bdev/bdev.o 00:03:11.911 CC lib/bdev/bdev_rpc.o 00:03:11.911 CC lib/bdev/bdev_zone.o 00:03:11.911 CC lib/event/app.o 00:03:11.911 CC lib/event/reactor.o 00:03:12.170 CC 
lib/event/log_rpc.o 00:03:12.170 LIB libspdk_virtio.a 00:03:12.170 SO libspdk_virtio.so.7.0 00:03:12.170 CC lib/fsdev/fsdev_io.o 00:03:12.170 SYMLINK libspdk_virtio.so 00:03:12.170 CC lib/bdev/part.o 00:03:12.170 CC lib/event/app_rpc.o 00:03:12.170 CC lib/bdev/scsi_nvme.o 00:03:12.427 CC lib/event/scheduler_static.o 00:03:12.427 CC lib/fsdev/fsdev_rpc.o 00:03:12.427 LIB libspdk_fsdev.a 00:03:12.427 LIB libspdk_event.a 00:03:12.427 SO libspdk_fsdev.so.2.0 00:03:12.427 SO libspdk_event.so.14.0 00:03:12.427 SYMLINK libspdk_fsdev.so 00:03:12.686 SYMLINK libspdk_event.so 00:03:12.686 LIB libspdk_nvme.a 00:03:12.686 SO libspdk_nvme.so.15.0 00:03:12.686 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:12.943 SYMLINK libspdk_nvme.so 00:03:13.512 LIB libspdk_fuse_dispatcher.a 00:03:13.512 SO libspdk_fuse_dispatcher.so.1.0 00:03:13.512 SYMLINK libspdk_fuse_dispatcher.so 00:03:13.770 LIB libspdk_blob.a 00:03:13.770 SO libspdk_blob.so.12.0 00:03:13.770 SYMLINK libspdk_blob.so 00:03:14.028 CC lib/lvol/lvol.o 00:03:14.028 CC lib/blobfs/tree.o 00:03:14.028 CC lib/blobfs/blobfs.o 00:03:14.594 LIB libspdk_bdev.a 00:03:14.594 SO libspdk_bdev.so.17.0 00:03:14.594 SYMLINK libspdk_bdev.so 00:03:14.852 CC lib/nvmf/ctrlr.o 00:03:14.852 CC lib/nvmf/ctrlr_discovery.o 00:03:14.852 CC lib/nvmf/ctrlr_bdev.o 00:03:14.852 CC lib/ublk/ublk.o 00:03:14.852 CC lib/nvmf/subsystem.o 00:03:14.852 CC lib/ftl/ftl_core.o 00:03:14.852 CC lib/scsi/dev.o 00:03:14.852 CC lib/nbd/nbd.o 00:03:14.852 LIB libspdk_blobfs.a 00:03:14.852 SO libspdk_blobfs.so.11.0 00:03:14.852 CC lib/scsi/lun.o 00:03:15.111 SYMLINK libspdk_blobfs.so 00:03:15.111 CC lib/scsi/port.o 00:03:15.111 LIB libspdk_lvol.a 00:03:15.111 SO libspdk_lvol.so.11.0 00:03:15.111 SYMLINK libspdk_lvol.so 00:03:15.111 CC lib/nbd/nbd_rpc.o 00:03:15.111 CC lib/ftl/ftl_init.o 00:03:15.111 CC lib/ftl/ftl_layout.o 00:03:15.111 CC lib/ftl/ftl_debug.o 00:03:15.371 CC lib/ftl/ftl_io.o 00:03:15.371 CC lib/scsi/scsi.o 00:03:15.371 LIB libspdk_nbd.a 00:03:15.371 SO libspdk_nbd.so.7.0 00:03:15.371 CC lib/scsi/scsi_bdev.o 00:03:15.371 SYMLINK libspdk_nbd.so 00:03:15.371 CC lib/scsi/scsi_pr.o 00:03:15.371 CC lib/ftl/ftl_sb.o 00:03:15.371 CC lib/ftl/ftl_l2p.o 00:03:15.371 CC lib/ublk/ublk_rpc.o 00:03:15.371 CC lib/scsi/scsi_rpc.o 00:03:15.371 CC lib/scsi/task.o 00:03:15.371 CC lib/ftl/ftl_l2p_flat.o 00:03:15.632 CC lib/ftl/ftl_nv_cache.o 00:03:15.632 LIB libspdk_ublk.a 00:03:15.632 CC lib/ftl/ftl_band.o 00:03:15.632 CC lib/nvmf/nvmf.o 00:03:15.632 SO libspdk_ublk.so.3.0 00:03:15.632 CC lib/nvmf/nvmf_rpc.o 00:03:15.632 SYMLINK libspdk_ublk.so 00:03:15.632 CC lib/nvmf/transport.o 00:03:15.632 CC lib/ftl/ftl_band_ops.o 00:03:15.632 CC lib/nvmf/tcp.o 00:03:15.892 LIB libspdk_scsi.a 00:03:15.892 SO libspdk_scsi.so.9.0 00:03:15.892 CC lib/nvmf/stubs.o 00:03:15.892 SYMLINK libspdk_scsi.so 00:03:15.892 CC lib/nvmf/mdns_server.o 00:03:15.892 CC lib/nvmf/rdma.o 00:03:16.150 CC lib/iscsi/conn.o 00:03:16.150 CC lib/nvmf/auth.o 00:03:16.409 CC lib/ftl/ftl_writer.o 00:03:16.409 CC lib/iscsi/init_grp.o 00:03:16.409 CC lib/ftl/ftl_rq.o 00:03:16.409 CC lib/ftl/ftl_reloc.o 00:03:16.667 CC lib/iscsi/iscsi.o 00:03:16.667 CC lib/vhost/vhost.o 00:03:16.667 CC lib/iscsi/param.o 00:03:16.667 CC lib/ftl/ftl_l2p_cache.o 00:03:16.667 CC lib/vhost/vhost_rpc.o 00:03:16.667 CC lib/vhost/vhost_scsi.o 00:03:16.667 CC lib/ftl/ftl_p2l.o 00:03:16.926 CC lib/iscsi/portal_grp.o 00:03:16.926 CC lib/iscsi/tgt_node.o 00:03:17.184 CC lib/iscsi/iscsi_subsystem.o 00:03:17.184 CC lib/vhost/vhost_blk.o 00:03:17.184 CC 
lib/ftl/ftl_p2l_log.o 00:03:17.184 CC lib/vhost/rte_vhost_user.o 00:03:17.184 CC lib/iscsi/iscsi_rpc.o 00:03:17.184 CC lib/iscsi/task.o 00:03:17.453 CC lib/ftl/mngt/ftl_mngt.o 00:03:17.453 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:17.453 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:17.453 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:17.453 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:17.454 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:17.454 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:17.722 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:17.722 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:17.722 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:17.722 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:17.722 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:17.722 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:17.722 LIB libspdk_iscsi.a 00:03:17.722 CC lib/ftl/utils/ftl_conf.o 00:03:17.722 SO libspdk_iscsi.so.8.0 00:03:17.722 CC lib/ftl/utils/ftl_md.o 00:03:17.722 CC lib/ftl/utils/ftl_mempool.o 00:03:17.980 CC lib/ftl/utils/ftl_bitmap.o 00:03:17.980 CC lib/ftl/utils/ftl_property.o 00:03:17.980 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:17.980 SYMLINK libspdk_iscsi.so 00:03:17.980 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:17.980 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:17.980 LIB libspdk_nvmf.a 00:03:17.980 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:17.980 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:17.980 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:17.980 SO libspdk_nvmf.so.20.0 00:03:17.980 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:17.980 LIB libspdk_vhost.a 00:03:17.980 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:17.980 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:17.980 SO libspdk_vhost.so.8.0 00:03:18.236 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:18.236 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:18.236 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:18.236 SYMLINK libspdk_vhost.so 00:03:18.236 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:18.236 CC lib/ftl/base/ftl_base_dev.o 00:03:18.236 CC lib/ftl/base/ftl_base_bdev.o 00:03:18.236 CC lib/ftl/ftl_trace.o 00:03:18.236 SYMLINK libspdk_nvmf.so 00:03:18.494 LIB libspdk_ftl.a 00:03:18.494 SO libspdk_ftl.so.9.0 00:03:18.752 SYMLINK libspdk_ftl.so 00:03:19.010 CC module/env_dpdk/env_dpdk_rpc.o 00:03:19.010 CC module/accel/ioat/accel_ioat.o 00:03:19.010 CC module/accel/dsa/accel_dsa.o 00:03:19.010 CC module/fsdev/aio/fsdev_aio.o 00:03:19.010 CC module/accel/error/accel_error.o 00:03:19.010 CC module/keyring/file/keyring.o 00:03:19.010 CC module/sock/posix/posix.o 00:03:19.010 CC module/keyring/linux/keyring.o 00:03:19.010 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:19.010 CC module/blob/bdev/blob_bdev.o 00:03:19.010 LIB libspdk_env_dpdk_rpc.a 00:03:19.010 SO libspdk_env_dpdk_rpc.so.6.0 00:03:19.267 SYMLINK libspdk_env_dpdk_rpc.so 00:03:19.267 CC module/keyring/file/keyring_rpc.o 00:03:19.267 CC module/keyring/linux/keyring_rpc.o 00:03:19.267 CC module/accel/ioat/accel_ioat_rpc.o 00:03:19.267 LIB libspdk_scheduler_dynamic.a 00:03:19.267 SO libspdk_scheduler_dynamic.so.4.0 00:03:19.267 CC module/accel/error/accel_error_rpc.o 00:03:19.267 LIB libspdk_keyring_file.a 00:03:19.267 SYMLINK libspdk_scheduler_dynamic.so 00:03:19.267 SO libspdk_keyring_file.so.2.0 00:03:19.267 LIB libspdk_keyring_linux.a 00:03:19.267 CC module/accel/iaa/accel_iaa.o 00:03:19.267 LIB libspdk_blob_bdev.a 00:03:19.267 LIB libspdk_accel_ioat.a 00:03:19.267 SO libspdk_keyring_linux.so.1.0 00:03:19.267 CC module/accel/dsa/accel_dsa_rpc.o 00:03:19.267 SO libspdk_blob_bdev.so.12.0 00:03:19.267 SYMLINK libspdk_keyring_file.so 00:03:19.267 SO libspdk_accel_ioat.so.6.0 
00:03:19.267 LIB libspdk_accel_error.a 00:03:19.267 SYMLINK libspdk_keyring_linux.so 00:03:19.525 SYMLINK libspdk_blob_bdev.so 00:03:19.525 SYMLINK libspdk_accel_ioat.so 00:03:19.525 SO libspdk_accel_error.so.2.0 00:03:19.525 CC module/accel/iaa/accel_iaa_rpc.o 00:03:19.525 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:19.525 CC module/fsdev/aio/linux_aio_mgr.o 00:03:19.525 LIB libspdk_accel_dsa.a 00:03:19.525 SYMLINK libspdk_accel_error.so 00:03:19.525 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:19.525 SO libspdk_accel_dsa.so.5.0 00:03:19.525 LIB libspdk_accel_iaa.a 00:03:19.525 SO libspdk_accel_iaa.so.3.0 00:03:19.525 SYMLINK libspdk_accel_dsa.so 00:03:19.525 CC module/scheduler/gscheduler/gscheduler.o 00:03:19.525 SYMLINK libspdk_accel_iaa.so 00:03:19.525 LIB libspdk_scheduler_dpdk_governor.a 00:03:19.525 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:19.525 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:19.783 CC module/bdev/delay/vbdev_delay.o 00:03:19.783 LIB libspdk_scheduler_gscheduler.a 00:03:19.783 CC module/bdev/error/vbdev_error.o 00:03:19.783 CC module/blobfs/bdev/blobfs_bdev.o 00:03:19.783 CC module/bdev/gpt/gpt.o 00:03:19.783 SO libspdk_scheduler_gscheduler.so.4.0 00:03:19.783 LIB libspdk_fsdev_aio.a 00:03:19.783 CC module/bdev/lvol/vbdev_lvol.o 00:03:19.783 CC module/bdev/malloc/bdev_malloc.o 00:03:19.783 SO libspdk_fsdev_aio.so.1.0 00:03:19.783 SYMLINK libspdk_scheduler_gscheduler.so 00:03:19.783 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:19.783 SYMLINK libspdk_fsdev_aio.so 00:03:19.783 CC module/bdev/gpt/vbdev_gpt.o 00:03:19.783 CC module/bdev/null/bdev_null.o 00:03:19.783 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:19.783 LIB libspdk_sock_posix.a 00:03:19.783 SO libspdk_sock_posix.so.6.0 00:03:19.783 CC module/bdev/error/vbdev_error_rpc.o 00:03:19.783 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:19.783 SYMLINK libspdk_sock_posix.so 00:03:20.041 CC module/bdev/null/bdev_null_rpc.o 00:03:20.041 LIB libspdk_blobfs_bdev.a 00:03:20.041 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:20.041 SO libspdk_blobfs_bdev.so.6.0 00:03:20.041 LIB libspdk_bdev_error.a 00:03:20.041 LIB libspdk_bdev_malloc.a 00:03:20.041 SO libspdk_bdev_error.so.6.0 00:03:20.041 SYMLINK libspdk_blobfs_bdev.so 00:03:20.041 CC module/bdev/nvme/bdev_nvme.o 00:03:20.041 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:20.041 SO libspdk_bdev_malloc.so.6.0 00:03:20.041 LIB libspdk_bdev_delay.a 00:03:20.041 SO libspdk_bdev_delay.so.6.0 00:03:20.041 LIB libspdk_bdev_gpt.a 00:03:20.041 SYMLINK libspdk_bdev_error.so 00:03:20.041 SYMLINK libspdk_bdev_malloc.so 00:03:20.041 CC module/bdev/nvme/nvme_rpc.o 00:03:20.041 CC module/bdev/nvme/bdev_mdns_client.o 00:03:20.041 LIB libspdk_bdev_null.a 00:03:20.041 SO libspdk_bdev_gpt.so.6.0 00:03:20.041 SYMLINK libspdk_bdev_delay.so 00:03:20.041 SO libspdk_bdev_null.so.6.0 00:03:20.041 SYMLINK libspdk_bdev_null.so 00:03:20.041 SYMLINK libspdk_bdev_gpt.so 00:03:20.299 CC module/bdev/passthru/vbdev_passthru.o 00:03:20.299 CC module/bdev/nvme/vbdev_opal.o 00:03:20.299 LIB libspdk_bdev_lvol.a 00:03:20.299 CC module/bdev/raid/bdev_raid.o 00:03:20.299 CC module/bdev/raid/bdev_raid_rpc.o 00:03:20.299 SO libspdk_bdev_lvol.so.6.0 00:03:20.299 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:20.299 CC module/bdev/split/vbdev_split.o 00:03:20.299 SYMLINK libspdk_bdev_lvol.so 00:03:20.299 CC module/bdev/split/vbdev_split_rpc.o 00:03:20.299 CC module/bdev/xnvme/bdev_xnvme.o 00:03:20.299 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:20.299 CC module/bdev/nvme/vbdev_opal_rpc.o 
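A note on the LIB/SO/SYMLINK entries interleaved above: with a shared build enabled, SPDK's library rules emit a static archive (LIB), link a versioned shared object (SO, e.g. libspdk_accel_ioat.so.6.0), and then place an unversioned development symlink next to it (SYMLINK). A sketch of what one SO/SYMLINK pair amounts to; the linker flags and output paths here are illustrative, and the real recipe lives in SPDK's mk/spdk.lib.mk:

    # "SO" step: link the versioned shared object from the module's objects
    # ($OBJS stands in for the object list; soname handling is illustrative)
    cc -shared -o build/lib/libspdk_accel_ioat.so.6.0 $OBJS \
        -Wl,-soname,libspdk_accel_ioat.so.6.0
    # "SYMLINK" step: unversioned name for development-time linking
    ln -sf libspdk_accel_ioat.so.6.0 build/lib/libspdk_accel_ioat.so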
00:03:20.299 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:20.556 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:20.556 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:20.556 LIB libspdk_bdev_split.a 00:03:20.556 CC module/bdev/raid/bdev_raid_sb.o 00:03:20.556 SO libspdk_bdev_split.so.6.0 00:03:20.556 CC module/bdev/raid/raid0.o 00:03:20.556 LIB libspdk_bdev_passthru.a 00:03:20.556 CC module/bdev/raid/raid1.o 00:03:20.556 SO libspdk_bdev_passthru.so.6.0 00:03:20.556 LIB libspdk_bdev_zone_block.a 00:03:20.556 SYMLINK libspdk_bdev_split.so 00:03:20.556 CC module/bdev/raid/concat.o 00:03:20.556 SO libspdk_bdev_zone_block.so.6.0 00:03:20.556 LIB libspdk_bdev_xnvme.a 00:03:20.556 SYMLINK libspdk_bdev_passthru.so 00:03:20.556 SO libspdk_bdev_xnvme.so.3.0 00:03:20.556 SYMLINK libspdk_bdev_zone_block.so 00:03:20.814 SYMLINK libspdk_bdev_xnvme.so 00:03:20.814 CC module/bdev/aio/bdev_aio.o 00:03:20.814 CC module/bdev/aio/bdev_aio_rpc.o 00:03:20.814 CC module/bdev/iscsi/bdev_iscsi.o 00:03:20.814 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:20.814 CC module/bdev/ftl/bdev_ftl.o 00:03:20.814 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:20.814 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:20.814 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:20.814 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:21.072 LIB libspdk_bdev_ftl.a 00:03:21.072 LIB libspdk_bdev_aio.a 00:03:21.072 SO libspdk_bdev_ftl.so.6.0 00:03:21.072 SO libspdk_bdev_aio.so.6.0 00:03:21.072 LIB libspdk_bdev_iscsi.a 00:03:21.072 SYMLINK libspdk_bdev_ftl.so 00:03:21.072 SO libspdk_bdev_iscsi.so.6.0 00:03:21.072 SYMLINK libspdk_bdev_aio.so 00:03:21.330 SYMLINK libspdk_bdev_iscsi.so 00:03:21.330 LIB libspdk_bdev_raid.a 00:03:21.330 SO libspdk_bdev_raid.so.6.0 00:03:21.330 LIB libspdk_bdev_virtio.a 00:03:21.330 SYMLINK libspdk_bdev_raid.so 00:03:21.330 SO libspdk_bdev_virtio.so.6.0 00:03:21.588 SYMLINK libspdk_bdev_virtio.so 00:03:22.156 LIB libspdk_bdev_nvme.a 00:03:22.156 SO libspdk_bdev_nvme.so.7.1 00:03:22.156 SYMLINK libspdk_bdev_nvme.so 00:03:22.725 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:22.725 CC module/event/subsystems/fsdev/fsdev.o 00:03:22.725 CC module/event/subsystems/iobuf/iobuf.o 00:03:22.725 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:22.725 CC module/event/subsystems/keyring/keyring.o 00:03:22.725 CC module/event/subsystems/sock/sock.o 00:03:22.725 CC module/event/subsystems/vmd/vmd.o 00:03:22.725 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:22.725 CC module/event/subsystems/scheduler/scheduler.o 00:03:22.725 LIB libspdk_event_fsdev.a 00:03:22.725 LIB libspdk_event_iobuf.a 00:03:22.725 LIB libspdk_event_keyring.a 00:03:22.725 LIB libspdk_event_vhost_blk.a 00:03:22.725 LIB libspdk_event_scheduler.a 00:03:22.725 LIB libspdk_event_sock.a 00:03:22.725 SO libspdk_event_fsdev.so.1.0 00:03:22.725 SO libspdk_event_keyring.so.1.0 00:03:22.725 SO libspdk_event_vhost_blk.so.3.0 00:03:22.725 LIB libspdk_event_vmd.a 00:03:22.725 SO libspdk_event_iobuf.so.3.0 00:03:22.725 SO libspdk_event_scheduler.so.4.0 00:03:22.725 SO libspdk_event_sock.so.5.0 00:03:22.725 SO libspdk_event_vmd.so.6.0 00:03:22.725 SYMLINK libspdk_event_vhost_blk.so 00:03:22.725 SYMLINK libspdk_event_fsdev.so 00:03:22.725 SYMLINK libspdk_event_scheduler.so 00:03:22.725 SYMLINK libspdk_event_keyring.so 00:03:22.725 SYMLINK libspdk_event_iobuf.so 00:03:22.725 SYMLINK libspdk_event_sock.so 00:03:22.725 SYMLINK libspdk_event_vmd.so 00:03:22.985 CC module/event/subsystems/accel/accel.o 00:03:23.246 LIB libspdk_event_accel.a 00:03:23.246 SO 
libspdk_event_accel.so.6.0 00:03:23.246 SYMLINK libspdk_event_accel.so 00:03:23.507 CC module/event/subsystems/bdev/bdev.o 00:03:23.766 LIB libspdk_event_bdev.a 00:03:23.766 SO libspdk_event_bdev.so.6.0 00:03:23.766 SYMLINK libspdk_event_bdev.so 00:03:24.024 CC module/event/subsystems/nbd/nbd.o 00:03:24.024 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:24.024 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:24.024 CC module/event/subsystems/ublk/ublk.o 00:03:24.024 CC module/event/subsystems/scsi/scsi.o 00:03:24.024 LIB libspdk_event_nbd.a 00:03:24.024 LIB libspdk_event_ublk.a 00:03:24.024 SO libspdk_event_nbd.so.6.0 00:03:24.024 SO libspdk_event_ublk.so.3.0 00:03:24.024 LIB libspdk_event_scsi.a 00:03:24.024 SYMLINK libspdk_event_nbd.so 00:03:24.024 SO libspdk_event_scsi.so.6.0 00:03:24.024 SYMLINK libspdk_event_ublk.so 00:03:24.024 LIB libspdk_event_nvmf.a 00:03:24.024 SYMLINK libspdk_event_scsi.so 00:03:24.024 SO libspdk_event_nvmf.so.6.0 00:03:24.281 SYMLINK libspdk_event_nvmf.so 00:03:24.281 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:24.281 CC module/event/subsystems/iscsi/iscsi.o 00:03:24.540 LIB libspdk_event_vhost_scsi.a 00:03:24.540 LIB libspdk_event_iscsi.a 00:03:24.540 SO libspdk_event_vhost_scsi.so.3.0 00:03:24.540 SO libspdk_event_iscsi.so.6.0 00:03:24.540 SYMLINK libspdk_event_vhost_scsi.so 00:03:24.540 SYMLINK libspdk_event_iscsi.so 00:03:24.798 SO libspdk.so.6.0 00:03:24.798 SYMLINK libspdk.so 00:03:24.798 TEST_HEADER include/spdk/accel.h 00:03:24.798 TEST_HEADER include/spdk/accel_module.h 00:03:24.798 TEST_HEADER include/spdk/assert.h 00:03:24.798 TEST_HEADER include/spdk/barrier.h 00:03:24.798 TEST_HEADER include/spdk/base64.h 00:03:24.798 TEST_HEADER include/spdk/bdev.h 00:03:24.798 CC app/trace_record/trace_record.o 00:03:24.798 TEST_HEADER include/spdk/bdev_module.h 00:03:24.798 TEST_HEADER include/spdk/bdev_zone.h 00:03:24.798 TEST_HEADER include/spdk/bit_array.h 00:03:24.798 CXX app/trace/trace.o 00:03:24.798 TEST_HEADER include/spdk/bit_pool.h 00:03:24.798 TEST_HEADER include/spdk/blob_bdev.h 00:03:24.798 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:24.798 TEST_HEADER include/spdk/blobfs.h 00:03:24.798 TEST_HEADER include/spdk/blob.h 00:03:24.798 TEST_HEADER include/spdk/conf.h 00:03:24.798 TEST_HEADER include/spdk/config.h 00:03:24.798 TEST_HEADER include/spdk/cpuset.h 00:03:25.057 TEST_HEADER include/spdk/crc16.h 00:03:25.057 TEST_HEADER include/spdk/crc32.h 00:03:25.057 TEST_HEADER include/spdk/crc64.h 00:03:25.057 TEST_HEADER include/spdk/dif.h 00:03:25.057 TEST_HEADER include/spdk/dma.h 00:03:25.057 TEST_HEADER include/spdk/endian.h 00:03:25.057 TEST_HEADER include/spdk/env_dpdk.h 00:03:25.057 CC app/nvmf_tgt/nvmf_main.o 00:03:25.057 TEST_HEADER include/spdk/env.h 00:03:25.057 TEST_HEADER include/spdk/event.h 00:03:25.057 TEST_HEADER include/spdk/fd_group.h 00:03:25.057 TEST_HEADER include/spdk/fd.h 00:03:25.057 TEST_HEADER include/spdk/file.h 00:03:25.057 TEST_HEADER include/spdk/fsdev.h 00:03:25.057 TEST_HEADER include/spdk/fsdev_module.h 00:03:25.057 TEST_HEADER include/spdk/ftl.h 00:03:25.057 TEST_HEADER include/spdk/gpt_spec.h 00:03:25.057 TEST_HEADER include/spdk/hexlify.h 00:03:25.057 TEST_HEADER include/spdk/histogram_data.h 00:03:25.057 TEST_HEADER include/spdk/idxd.h 00:03:25.057 TEST_HEADER include/spdk/idxd_spec.h 00:03:25.057 TEST_HEADER include/spdk/init.h 00:03:25.057 TEST_HEADER include/spdk/ioat.h 00:03:25.057 CC examples/ioat/perf/perf.o 00:03:25.057 CC examples/util/zipf/zipf.o 00:03:25.057 TEST_HEADER 
include/spdk/ioat_spec.h 00:03:25.057 TEST_HEADER include/spdk/iscsi_spec.h 00:03:25.057 CC test/thread/poller_perf/poller_perf.o 00:03:25.057 TEST_HEADER include/spdk/json.h 00:03:25.057 TEST_HEADER include/spdk/jsonrpc.h 00:03:25.057 TEST_HEADER include/spdk/keyring.h 00:03:25.057 TEST_HEADER include/spdk/keyring_module.h 00:03:25.057 TEST_HEADER include/spdk/likely.h 00:03:25.057 TEST_HEADER include/spdk/log.h 00:03:25.057 TEST_HEADER include/spdk/lvol.h 00:03:25.057 TEST_HEADER include/spdk/md5.h 00:03:25.057 TEST_HEADER include/spdk/memory.h 00:03:25.057 TEST_HEADER include/spdk/mmio.h 00:03:25.057 TEST_HEADER include/spdk/nbd.h 00:03:25.057 TEST_HEADER include/spdk/net.h 00:03:25.057 TEST_HEADER include/spdk/notify.h 00:03:25.057 TEST_HEADER include/spdk/nvme.h 00:03:25.057 TEST_HEADER include/spdk/nvme_intel.h 00:03:25.057 CC test/dma/test_dma/test_dma.o 00:03:25.057 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:25.057 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:25.057 TEST_HEADER include/spdk/nvme_spec.h 00:03:25.057 CC test/app/bdev_svc/bdev_svc.o 00:03:25.057 TEST_HEADER include/spdk/nvme_zns.h 00:03:25.057 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:25.057 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:25.057 TEST_HEADER include/spdk/nvmf.h 00:03:25.057 TEST_HEADER include/spdk/nvmf_spec.h 00:03:25.057 TEST_HEADER include/spdk/nvmf_transport.h 00:03:25.057 TEST_HEADER include/spdk/opal.h 00:03:25.057 TEST_HEADER include/spdk/opal_spec.h 00:03:25.057 TEST_HEADER include/spdk/pci_ids.h 00:03:25.057 TEST_HEADER include/spdk/pipe.h 00:03:25.057 TEST_HEADER include/spdk/queue.h 00:03:25.057 TEST_HEADER include/spdk/reduce.h 00:03:25.057 TEST_HEADER include/spdk/rpc.h 00:03:25.057 TEST_HEADER include/spdk/scheduler.h 00:03:25.057 TEST_HEADER include/spdk/scsi.h 00:03:25.057 TEST_HEADER include/spdk/scsi_spec.h 00:03:25.057 TEST_HEADER include/spdk/sock.h 00:03:25.057 TEST_HEADER include/spdk/stdinc.h 00:03:25.057 TEST_HEADER include/spdk/string.h 00:03:25.057 TEST_HEADER include/spdk/thread.h 00:03:25.057 TEST_HEADER include/spdk/trace.h 00:03:25.057 TEST_HEADER include/spdk/trace_parser.h 00:03:25.057 TEST_HEADER include/spdk/tree.h 00:03:25.057 TEST_HEADER include/spdk/ublk.h 00:03:25.057 TEST_HEADER include/spdk/util.h 00:03:25.057 TEST_HEADER include/spdk/uuid.h 00:03:25.057 CC test/env/mem_callbacks/mem_callbacks.o 00:03:25.057 TEST_HEADER include/spdk/version.h 00:03:25.057 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:25.057 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:25.057 TEST_HEADER include/spdk/vhost.h 00:03:25.057 TEST_HEADER include/spdk/vmd.h 00:03:25.057 TEST_HEADER include/spdk/xor.h 00:03:25.057 TEST_HEADER include/spdk/zipf.h 00:03:25.057 CXX test/cpp_headers/accel.o 00:03:25.057 LINK nvmf_tgt 00:03:25.057 LINK spdk_trace_record 00:03:25.057 LINK zipf 00:03:25.057 LINK poller_perf 00:03:25.057 LINK bdev_svc 00:03:25.057 LINK ioat_perf 00:03:25.315 CXX test/cpp_headers/accel_module.o 00:03:25.315 LINK spdk_trace 00:03:25.315 CC examples/ioat/verify/verify.o 00:03:25.315 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:25.315 LINK test_dma 00:03:25.315 CXX test/cpp_headers/assert.o 00:03:25.315 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:25.315 CC examples/thread/thread/thread_ex.o 00:03:25.315 CC examples/sock/hello_world/hello_sock.o 00:03:25.315 LINK verify 00:03:25.587 CC examples/vmd/lsvmd/lsvmd.o 00:03:25.587 CXX test/cpp_headers/barrier.o 00:03:25.587 CC app/iscsi_tgt/iscsi_tgt.o 00:03:25.587 LINK interrupt_tgt 00:03:25.587 LINK mem_callbacks 00:03:25.587 
LINK lsvmd 00:03:25.587 LINK thread 00:03:25.587 CC app/spdk_lspci/spdk_lspci.o 00:03:25.587 CXX test/cpp_headers/base64.o 00:03:25.587 CC app/spdk_tgt/spdk_tgt.o 00:03:25.587 LINK iscsi_tgt 00:03:25.587 LINK hello_sock 00:03:25.587 CC test/env/vtophys/vtophys.o 00:03:25.587 LINK spdk_lspci 00:03:25.587 LINK nvme_fuzz 00:03:25.846 CC app/spdk_nvme_perf/perf.o 00:03:25.846 CC examples/vmd/led/led.o 00:03:25.846 CXX test/cpp_headers/bdev.o 00:03:25.846 CC app/spdk_nvme_identify/identify.o 00:03:25.846 LINK vtophys 00:03:25.846 LINK spdk_tgt 00:03:25.846 LINK led 00:03:25.846 CC test/app/histogram_perf/histogram_perf.o 00:03:25.846 CXX test/cpp_headers/bdev_module.o 00:03:25.846 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:26.104 CC examples/idxd/perf/perf.o 00:03:26.104 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:26.104 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:26.104 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:26.104 LINK histogram_perf 00:03:26.104 CXX test/cpp_headers/bdev_zone.o 00:03:26.104 CC examples/accel/perf/accel_perf.o 00:03:26.104 LINK env_dpdk_post_init 00:03:26.104 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:26.104 CXX test/cpp_headers/bit_array.o 00:03:26.363 LINK idxd_perf 00:03:26.363 LINK hello_fsdev 00:03:26.363 CXX test/cpp_headers/bit_pool.o 00:03:26.363 CC test/env/memory/memory_ut.o 00:03:26.363 LINK spdk_nvme_perf 00:03:26.363 CC test/env/pci/pci_ut.o 00:03:26.363 CXX test/cpp_headers/blob_bdev.o 00:03:26.363 CC examples/blob/hello_world/hello_blob.o 00:03:26.363 CXX test/cpp_headers/blobfs_bdev.o 00:03:26.622 LINK accel_perf 00:03:26.622 CXX test/cpp_headers/blobfs.o 00:03:26.622 LINK vhost_fuzz 00:03:26.622 LINK hello_blob 00:03:26.622 LINK spdk_nvme_identify 00:03:26.622 CXX test/cpp_headers/blob.o 00:03:26.622 CC app/spdk_nvme_discover/discovery_aer.o 00:03:26.622 CC app/spdk_top/spdk_top.o 00:03:26.622 CC examples/nvme/hello_world/hello_world.o 00:03:26.880 CXX test/cpp_headers/conf.o 00:03:26.880 CC examples/nvme/reconnect/reconnect.o 00:03:26.880 CC examples/blob/cli/blobcli.o 00:03:26.880 LINK spdk_nvme_discover 00:03:26.880 CC examples/bdev/hello_world/hello_bdev.o 00:03:26.880 CXX test/cpp_headers/config.o 00:03:26.880 LINK pci_ut 00:03:26.880 CXX test/cpp_headers/cpuset.o 00:03:26.880 LINK hello_world 00:03:26.880 CXX test/cpp_headers/crc16.o 00:03:27.139 CC test/rpc_client/rpc_client_test.o 00:03:27.139 LINK reconnect 00:03:27.139 LINK hello_bdev 00:03:27.139 CXX test/cpp_headers/crc32.o 00:03:27.139 LINK rpc_client_test 00:03:27.139 CC test/accel/dif/dif.o 00:03:27.139 LINK iscsi_fuzz 00:03:27.139 CC test/blobfs/mkfs/mkfs.o 00:03:27.139 CXX test/cpp_headers/crc64.o 00:03:27.139 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:27.397 LINK blobcli 00:03:27.397 CC examples/bdev/bdevperf/bdevperf.o 00:03:27.397 CC examples/nvme/arbitration/arbitration.o 00:03:27.397 LINK spdk_top 00:03:27.397 CXX test/cpp_headers/dif.o 00:03:27.397 LINK mkfs 00:03:27.397 LINK memory_ut 00:03:27.397 CC test/app/jsoncat/jsoncat.o 00:03:27.397 CXX test/cpp_headers/dma.o 00:03:27.656 CC examples/nvme/hotplug/hotplug.o 00:03:27.656 CXX test/cpp_headers/endian.o 00:03:27.656 LINK arbitration 00:03:27.656 CC app/vhost/vhost.o 00:03:27.656 LINK jsoncat 00:03:27.656 CXX test/cpp_headers/env_dpdk.o 00:03:27.657 LINK nvme_manage 00:03:27.657 CXX test/cpp_headers/env.o 00:03:27.657 CC app/spdk_dd/spdk_dd.o 00:03:27.657 CXX test/cpp_headers/event.o 00:03:27.657 LINK hotplug 00:03:27.657 LINK vhost 00:03:27.915 LINK dif 00:03:27.915 CC 
test/app/stub/stub.o 00:03:27.915 CC test/event/event_perf/event_perf.o 00:03:27.915 CXX test/cpp_headers/fd_group.o 00:03:27.915 LINK event_perf 00:03:27.915 CC test/lvol/esnap/esnap.o 00:03:27.915 LINK bdevperf 00:03:27.915 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:27.915 LINK stub 00:03:27.915 CXX test/cpp_headers/fd.o 00:03:28.173 CC test/nvme/aer/aer.o 00:03:28.173 LINK spdk_dd 00:03:28.173 LINK cmb_copy 00:03:28.173 CC app/fio/nvme/fio_plugin.o 00:03:28.173 CXX test/cpp_headers/file.o 00:03:28.173 CC test/event/reactor/reactor.o 00:03:28.173 CC test/bdev/bdevio/bdevio.o 00:03:28.173 CC test/nvme/reset/reset.o 00:03:28.173 LINK reactor 00:03:28.173 CXX test/cpp_headers/fsdev.o 00:03:28.173 CC test/event/reactor_perf/reactor_perf.o 00:03:28.173 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:28.173 CC examples/nvme/abort/abort.o 00:03:28.431 LINK aer 00:03:28.431 CC test/event/app_repeat/app_repeat.o 00:03:28.431 CXX test/cpp_headers/fsdev_module.o 00:03:28.431 LINK reactor_perf 00:03:28.431 LINK reset 00:03:28.431 LINK bdevio 00:03:28.431 LINK pmr_persistence 00:03:28.431 LINK app_repeat 00:03:28.431 CXX test/cpp_headers/ftl.o 00:03:28.431 CC test/event/scheduler/scheduler.o 00:03:28.690 CXX test/cpp_headers/gpt_spec.o 00:03:28.690 LINK spdk_nvme 00:03:28.690 CXX test/cpp_headers/hexlify.o 00:03:28.690 CC test/nvme/sgl/sgl.o 00:03:28.690 CC app/fio/bdev/fio_plugin.o 00:03:28.690 LINK abort 00:03:28.690 CXX test/cpp_headers/histogram_data.o 00:03:28.690 CXX test/cpp_headers/idxd.o 00:03:28.690 CXX test/cpp_headers/idxd_spec.o 00:03:28.690 CXX test/cpp_headers/init.o 00:03:28.690 CC test/nvme/e2edp/nvme_dp.o 00:03:28.690 CXX test/cpp_headers/ioat.o 00:03:28.690 CXX test/cpp_headers/ioat_spec.o 00:03:28.690 LINK scheduler 00:03:28.690 CXX test/cpp_headers/iscsi_spec.o 00:03:28.949 LINK sgl 00:03:28.949 CXX test/cpp_headers/json.o 00:03:28.949 CXX test/cpp_headers/jsonrpc.o 00:03:28.949 CXX test/cpp_headers/keyring.o 00:03:28.949 CXX test/cpp_headers/keyring_module.o 00:03:28.949 CC examples/nvmf/nvmf/nvmf.o 00:03:28.949 LINK nvme_dp 00:03:28.949 CC test/nvme/overhead/overhead.o 00:03:28.949 CXX test/cpp_headers/likely.o 00:03:28.949 CXX test/cpp_headers/log.o 00:03:28.949 CC test/nvme/err_injection/err_injection.o 00:03:28.949 CXX test/cpp_headers/lvol.o 00:03:29.207 LINK spdk_bdev 00:03:29.207 CC test/nvme/startup/startup.o 00:03:29.207 CC test/nvme/reserve/reserve.o 00:03:29.207 CXX test/cpp_headers/md5.o 00:03:29.207 CXX test/cpp_headers/memory.o 00:03:29.207 CC test/nvme/simple_copy/simple_copy.o 00:03:29.207 CXX test/cpp_headers/mmio.o 00:03:29.207 LINK err_injection 00:03:29.207 LINK overhead 00:03:29.207 LINK nvmf 00:03:29.207 LINK startup 00:03:29.207 CXX test/cpp_headers/nbd.o 00:03:29.207 CXX test/cpp_headers/net.o 00:03:29.207 LINK reserve 00:03:29.207 CXX test/cpp_headers/notify.o 00:03:29.465 CXX test/cpp_headers/nvme.o 00:03:29.465 LINK simple_copy 00:03:29.465 CXX test/cpp_headers/nvme_intel.o 00:03:29.466 CC test/nvme/connect_stress/connect_stress.o 00:03:29.466 CC test/nvme/boot_partition/boot_partition.o 00:03:29.466 CXX test/cpp_headers/nvme_ocssd.o 00:03:29.466 CC test/nvme/compliance/nvme_compliance.o 00:03:29.466 CC test/nvme/fused_ordering/fused_ordering.o 00:03:29.466 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:29.466 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:29.724 LINK boot_partition 00:03:29.724 CXX test/cpp_headers/nvme_spec.o 00:03:29.724 CC test/nvme/fdp/fdp.o 00:03:29.724 CC test/nvme/cuse/cuse.o 00:03:29.724 LINK connect_stress 
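The long runs of CXX test/cpp_headers/*.o objects here, paired with the TEST_HEADER include/spdk/*.h list earlier, come from a build target that compiles each public header in isolation, so a header with missing includes fails on its own instead of compiling only when some other header happens to precede it. A minimal sketch of that pattern, assuming g++ and the include layout shown in the log (the wrapper-file mechanics below are illustrative, not the exact SPDK makefile rules):

    # Compile every public header standalone; a failure means the header
    # does not pull in everything it depends on.
    mkdir -p test/cpp_headers
    for h in include/spdk/*.h; do
        name=$(basename "${h%.h}")
        echo "#include <spdk/${name}.h>" > "test/cpp_headers/${name}.cpp"
        g++ -I include -c "test/cpp_headers/${name}.cpp" \
            -o "test/cpp_headers/${name}.o" || echo "FAIL: $h"
    done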
00:03:29.724 CXX test/cpp_headers/nvme_zns.o 00:03:29.724 LINK fused_ordering 00:03:29.724 CXX test/cpp_headers/nvmf_cmd.o 00:03:29.724 LINK doorbell_aers 00:03:29.724 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:29.724 CXX test/cpp_headers/nvmf.o 00:03:29.724 CXX test/cpp_headers/nvmf_spec.o 00:03:29.724 LINK nvme_compliance 00:03:29.724 CXX test/cpp_headers/nvmf_transport.o 00:03:29.724 CXX test/cpp_headers/opal.o 00:03:29.724 CXX test/cpp_headers/opal_spec.o 00:03:29.982 LINK fdp 00:03:29.982 CXX test/cpp_headers/pci_ids.o 00:03:29.982 CXX test/cpp_headers/pipe.o 00:03:29.982 CXX test/cpp_headers/queue.o 00:03:29.982 CXX test/cpp_headers/reduce.o 00:03:29.982 CXX test/cpp_headers/rpc.o 00:03:29.982 CXX test/cpp_headers/scheduler.o 00:03:29.982 CXX test/cpp_headers/scsi.o 00:03:29.982 CXX test/cpp_headers/scsi_spec.o 00:03:29.982 CXX test/cpp_headers/sock.o 00:03:29.982 CXX test/cpp_headers/stdinc.o 00:03:29.982 CXX test/cpp_headers/string.o 00:03:29.982 CXX test/cpp_headers/thread.o 00:03:29.982 CXX test/cpp_headers/trace.o 00:03:30.240 CXX test/cpp_headers/trace_parser.o 00:03:30.240 CXX test/cpp_headers/tree.o 00:03:30.240 CXX test/cpp_headers/ublk.o 00:03:30.240 CXX test/cpp_headers/util.o 00:03:30.240 CXX test/cpp_headers/uuid.o 00:03:30.240 CXX test/cpp_headers/version.o 00:03:30.240 CXX test/cpp_headers/vfio_user_pci.o 00:03:30.240 CXX test/cpp_headers/vfio_user_spec.o 00:03:30.240 CXX test/cpp_headers/vhost.o 00:03:30.240 CXX test/cpp_headers/vmd.o 00:03:30.240 CXX test/cpp_headers/xor.o 00:03:30.240 CXX test/cpp_headers/zipf.o 00:03:30.806 LINK cuse 00:03:33.346 LINK esnap 00:03:33.346 00:03:33.346 real 1m3.754s 00:03:33.346 user 6m2.374s 00:03:33.346 sys 1m3.978s 00:03:33.346 ************************************ 00:03:33.346 END TEST make 00:03:33.346 ************************************ 00:03:33.346 12:15:40 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:33.346 12:15:40 make -- common/autotest_common.sh@10 -- $ set +x 00:03:33.346 12:15:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:33.346 12:15:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:33.346 12:15:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:33.346 12:15:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.346 12:15:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:33.346 12:15:40 -- pm/common@44 -- $ pid=5084 00:03:33.346 12:15:40 -- pm/common@50 -- $ kill -TERM 5084 00:03:33.346 12:15:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.346 12:15:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:33.346 12:15:40 -- pm/common@44 -- $ pid=5085 00:03:33.346 12:15:40 -- pm/common@50 -- $ kill -TERM 5085 00:03:33.346 12:15:40 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:33.346 12:15:40 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.346 12:15:40 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:33.346 12:15:40 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:33.346 12:15:40 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:33.346 12:15:40 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:33.346 12:15:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.346 12:15:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.346 12:15:40 -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.346 12:15:40 -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.346 12:15:40 -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.346 12:15:40 -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.346 12:15:40 -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.346 12:15:40 -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.346 12:15:40 -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.346 12:15:40 -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.346 12:15:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.346 12:15:40 -- scripts/common.sh@344 -- # case "$op" in 00:03:33.346 12:15:40 -- scripts/common.sh@345 -- # : 1 00:03:33.346 12:15:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.346 12:15:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.346 12:15:40 -- scripts/common.sh@365 -- # decimal 1 00:03:33.346 12:15:40 -- scripts/common.sh@353 -- # local d=1 00:03:33.346 12:15:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.346 12:15:40 -- scripts/common.sh@355 -- # echo 1 00:03:33.346 12:15:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.346 12:15:40 -- scripts/common.sh@366 -- # decimal 2 00:03:33.346 12:15:40 -- scripts/common.sh@353 -- # local d=2 00:03:33.346 12:15:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.346 12:15:40 -- scripts/common.sh@355 -- # echo 2 00:03:33.346 12:15:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.346 12:15:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.346 12:15:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.346 12:15:40 -- scripts/common.sh@368 -- # return 0 00:03:33.346 12:15:40 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.346 12:15:40 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:33.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.346 --rc genhtml_branch_coverage=1 00:03:33.346 --rc genhtml_function_coverage=1 00:03:33.346 --rc genhtml_legend=1 00:03:33.346 --rc geninfo_all_blocks=1 00:03:33.346 --rc geninfo_unexecuted_blocks=1 00:03:33.346 00:03:33.346 ' 00:03:33.346 12:15:40 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:33.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.346 --rc genhtml_branch_coverage=1 00:03:33.346 --rc genhtml_function_coverage=1 00:03:33.346 --rc genhtml_legend=1 00:03:33.346 --rc geninfo_all_blocks=1 00:03:33.346 --rc geninfo_unexecuted_blocks=1 00:03:33.346 00:03:33.346 ' 00:03:33.346 12:15:40 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:33.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.346 --rc genhtml_branch_coverage=1 00:03:33.346 --rc genhtml_function_coverage=1 00:03:33.346 --rc genhtml_legend=1 00:03:33.346 --rc geninfo_all_blocks=1 00:03:33.346 --rc geninfo_unexecuted_blocks=1 00:03:33.346 00:03:33.346 ' 00:03:33.346 12:15:40 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:33.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.346 --rc genhtml_branch_coverage=1 00:03:33.346 --rc genhtml_function_coverage=1 00:03:33.346 --rc genhtml_legend=1 00:03:33.346 --rc geninfo_all_blocks=1 00:03:33.346 --rc geninfo_unexecuted_blocks=1 00:03:33.346 00:03:33.346 ' 00:03:33.346 12:15:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:33.346 12:15:40 -- nvmf/common.sh@7 -- # uname -s 00:03:33.346 12:15:40 -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.346 12:15:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.346 12:15:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:33.346 12:15:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:33.346 12:15:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:33.346 12:15:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:33.346 12:15:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.346 12:15:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:33.346 12:15:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.346 12:15:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:33.346 12:15:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63e07393-7b56-4cb3-b844-5ae779d86e1b 00:03:33.346 12:15:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=63e07393-7b56-4cb3-b844-5ae779d86e1b 00:03:33.346 12:15:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.346 12:15:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:33.346 12:15:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:33.346 12:15:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:33.346 12:15:40 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:33.346 12:15:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:33.346 12:15:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.346 12:15:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.346 12:15:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.347 12:15:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.347 12:15:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.347 12:15:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.347 12:15:40 -- paths/export.sh@5 -- # export PATH 00:03:33.347 12:15:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.347 12:15:40 -- nvmf/common.sh@51 -- # : 0 00:03:33.347 12:15:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:33.347 12:15:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:33.347 12:15:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:33.347 12:15:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.347 12:15:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.347 12:15:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:33.347 
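The trace just above ends in '[' '' -eq 1 ']', and the next line records the resulting failure: -eq forces an integer comparison, so a variable that expands to the empty string makes [ abort with "integer expression expected" rather than simply returning false. A hedged sketch of the usual guard (the variable name is illustrative; the log does not show which variable was empty at nvmf/common.sh line 33):

    # Give possibly-unset flags a numeric default before testing them.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi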
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:33.347 12:15:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:33.347 12:15:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:33.347 12:15:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:33.347 12:15:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:33.347 12:15:40 -- spdk/autotest.sh@32 -- # uname -s 00:03:33.347 12:15:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:33.347 12:15:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:33.347 12:15:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.347 12:15:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:33.347 12:15:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.347 12:15:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:33.606 12:15:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:33.606 12:15:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:33.606 12:15:40 -- spdk/autotest.sh@48 -- # udevadm_pid=56033 00:03:33.606 12:15:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:33.606 12:15:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:33.606 12:15:40 -- pm/common@17 -- # local monitor 00:03:33.606 12:15:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.606 12:15:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.606 12:15:40 -- pm/common@25 -- # sleep 1 00:03:33.606 12:15:40 -- pm/common@21 -- # date +%s 00:03:33.606 12:15:40 -- pm/common@21 -- # date +%s 00:03:33.606 12:15:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734351340 00:03:33.606 12:15:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734351340 00:03:33.606 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734351340_collect-vmstat.pm.log 00:03:33.606 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734351340_collect-cpu-load.pm.log 00:03:34.548 12:15:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:34.548 12:15:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:34.548 12:15:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.548 12:15:41 -- common/autotest_common.sh@10 -- # set +x 00:03:34.548 12:15:41 -- spdk/autotest.sh@59 -- # create_test_list 00:03:34.548 12:15:41 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:34.548 12:15:41 -- common/autotest_common.sh@10 -- # set +x 00:03:34.548 12:15:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:34.548 12:15:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:34.548 12:15:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:34.548 12:15:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:34.548 12:15:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:34.548 12:15:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:34.548 12:15:41 -- common/autotest_common.sh@1457 -- # uname 00:03:34.548 12:15:41 -- 
common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:34.548 12:15:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:34.548 12:15:41 -- common/autotest_common.sh@1477 -- # uname 00:03:34.548 12:15:41 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:34.548 12:15:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:34.548 12:15:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:34.548 lcov: LCOV version 1.15 00:03:34.548 12:15:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:49.455 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:49.455 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:04.367 12:16:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:04.367 12:16:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.367 12:16:11 -- common/autotest_common.sh@10 -- # set +x 00:04:04.367 12:16:11 -- spdk/autotest.sh@78 -- # rm -f 00:04:04.367 12:16:11 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.511 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:05.511 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:05.511 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:05.511 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:05.511 12:16:12 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:05.511 12:16:12 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:05.511 12:16:12 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:05.511 12:16:12 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:05.511 12:16:12 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:05.511 12:16:12 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:05.511 12:16:12 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:05.511 12:16:12 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:05.511 12:16:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:05.511 12:16:12 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:05.511 12:16:12 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:05.511 12:16:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.511 12:16:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.511 12:16:12 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:05.511 12:16:12 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:05.511 12:16:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:05.511 12:16:12 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:05.511 12:16:12 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:05.511 12:16:12 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:05.511 12:16:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.511 12:16:12 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:05.511 12:16:12 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:04:05.511 12:16:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:05.511 12:16:12 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:04:05.511 12:16:12 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:05.511 12:16:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:05.511 12:16:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.511 12:16:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:05.511 12:16:12 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:04:05.511 12:16:12 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:05.511 12:16:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:05.511 12:16:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.511 12:16:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:05.512 12:16:12 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:04:05.512 12:16:12 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:05.512 12:16:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:05.512 12:16:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.512 12:16:12 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:05.512 12:16:12 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:04:05.512 12:16:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:05.512 12:16:12 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:04:05.512 12:16:12 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:05.512 12:16:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:05.512 12:16:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.512 12:16:12 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:05.512 12:16:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.512 12:16:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.512 12:16:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:05.512 12:16:12 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:05.512 12:16:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:05.512 No valid GPT data, bailing 00:04:05.512 12:16:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:05.512 12:16:12 -- scripts/common.sh@394 -- # pt= 00:04:05.512 12:16:12 -- scripts/common.sh@395 -- # return 1 00:04:05.512 12:16:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:05.512 1+0 records in 00:04:05.512 1+0 records out 00:04:05.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033791 s, 31.0 MB/s 00:04:05.512 12:16:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.512 12:16:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.512 12:16:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:05.512 12:16:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:05.512 12:16:12 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:05.512 No valid GPT data, bailing 00:04:05.512 12:16:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:05.775 12:16:12 -- scripts/common.sh@394 -- # pt= 00:04:05.775 12:16:12 -- scripts/common.sh@395 -- # return 1 00:04:05.775 12:16:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:05.775 1+0 records in 00:04:05.775 1+0 records out 00:04:05.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542611 s, 193 MB/s 00:04:05.775 12:16:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.775 12:16:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.775 12:16:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:05.775 12:16:12 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:05.775 12:16:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:05.775 No valid GPT data, bailing 00:04:05.775 12:16:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:05.775 12:16:12 -- scripts/common.sh@394 -- # pt= 00:04:05.775 12:16:12 -- scripts/common.sh@395 -- # return 1 00:04:05.775 12:16:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:05.775 1+0 records in 00:04:05.775 1+0 records out 00:04:05.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465026 s, 225 MB/s 00:04:05.775 12:16:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.775 12:16:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.775 12:16:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:05.775 12:16:12 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:05.775 12:16:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:05.775 No valid GPT data, bailing 00:04:05.775 12:16:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:05.775 12:16:12 -- scripts/common.sh@394 -- # pt= 00:04:05.775 12:16:12 -- scripts/common.sh@395 -- # return 1 00:04:05.775 12:16:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:05.775 1+0 records in 00:04:05.775 1+0 records out 00:04:05.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590148 s, 178 MB/s 00:04:05.775 12:16:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.775 12:16:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.775 12:16:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:05.775 12:16:12 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:05.775 12:16:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:05.775 No valid GPT data, bailing 00:04:05.775 12:16:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:06.036 12:16:12 -- scripts/common.sh@394 -- # pt= 00:04:06.036 12:16:12 -- scripts/common.sh@395 -- # return 1 00:04:06.036 12:16:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:06.036 1+0 records in 00:04:06.036 1+0 records out 00:04:06.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00648785 s, 162 MB/s 00:04:06.036 12:16:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:06.036 12:16:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:06.036 12:16:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:06.036 12:16:12 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:06.036 12:16:12 -- scripts/common.sh@390 -- 
# /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:06.036 No valid GPT data, bailing 00:04:06.036 12:16:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:06.036 12:16:12 -- scripts/common.sh@394 -- # pt= 00:04:06.036 12:16:12 -- scripts/common.sh@395 -- # return 1 00:04:06.036 12:16:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:06.036 1+0 records in 00:04:06.036 1+0 records out 00:04:06.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613838 s, 171 MB/s 00:04:06.036 12:16:12 -- spdk/autotest.sh@105 -- # sync 00:04:06.036 12:16:13 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:06.036 12:16:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:06.036 12:16:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:07.971 12:16:14 -- spdk/autotest.sh@111 -- # uname -s 00:04:07.971 12:16:14 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:07.971 12:16:14 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:07.971 12:16:14 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:08.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.803 Hugepages 00:04:08.803 node hugesize free / total 00:04:08.803 node0 1048576kB 0 / 0 00:04:08.803 node0 2048kB 0 / 0 00:04:08.803 00:04:08.803 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.803 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:08.803 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:08.803 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:08.803 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:09.065 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:09.065 12:16:15 -- spdk/autotest.sh@117 -- # uname -s 00:04:09.065 12:16:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:09.065 12:16:15 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:09.065 12:16:15 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.896 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.897 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.157 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.157 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.157 12:16:17 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:11.099 12:16:18 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:11.099 12:16:18 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:11.099 12:16:18 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:11.099 12:16:18 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:11.099 12:16:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:11.099 12:16:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:11.099 12:16:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:11.099 12:16:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:11.099 12:16:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:11.359 12:16:18 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:11.359 12:16:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:11.359 12:16:18 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.620 Waiting for block devices as requested 00:04:11.620 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.883 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.883 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.883 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.175 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:17.175 12:16:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:17.175 12:16:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:17.175 12:16:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:17.175 12:16:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:17.175 12:16:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:17.175 12:16:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:17.175 12:16:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:17.175 12:16:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:17.175 12:16:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1543 -- # continue 00:04:17.175 12:16:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:17.175 12:16:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:17.175 12:16:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:17.175 12:16:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:17.175 12:16:24 -- 
common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:17.175 12:16:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:17.175 12:16:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:17.175 12:16:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:17.175 12:16:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1543 -- # continue 00:04:17.175 12:16:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:17.175 12:16:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:17.175 12:16:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:17.175 12:16:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:17.175 12:16:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1543 -- # continue 00:04:17.175 12:16:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:17.175 12:16:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.175 12:16:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 
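These get_nvme_ctrlr_from_bdf traces resolve a PCI address to its NVMe controller purely through sysfs: readlink -f expands each /sys/class/nvme/nvme* symlink to its full PCI path, grep keeps the one containing the wanted BDF, and basename reduces it to the controller name (note the numbering need not match PCI order; 0000:00:10.0 maps to nvme1 here). The same lookup as a standalone sketch, assuming the sysfs layout shown in the trace:

    # Map a PCI BDF to its NVMe controller name via sysfs symlinks.
    bdf=0000:00:10.0
    for link in /sys/class/nvme/nvme*; do
        path=$(readlink -f "$link")   # e.g. /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
        case $path in
            *"/$bdf/nvme/"*) basename "$path" ;;   # prints e.g. nvme1
        esac
    done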
00:04:17.175 12:16:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:17.175 12:16:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:17.175 12:16:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:17.175 12:16:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:17.175 12:16:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:17.175 12:16:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:17.175 12:16:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:17.175 12:16:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:17.175 12:16:24 -- common/autotest_common.sh@1543 -- # continue 00:04:17.175 12:16:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:17.175 12:16:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:17.175 12:16:24 -- common/autotest_common.sh@10 -- # set +x 00:04:17.175 12:16:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:17.175 12:16:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.175 12:16:24 -- common/autotest_common.sh@10 -- # set +x 00:04:17.175 12:16:24 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.318 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.318 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.318 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.318 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.318 12:16:25 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:18.318 12:16:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.318 12:16:25 -- common/autotest_common.sh@10 -- # set +x 00:04:18.579 12:16:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:18.579 12:16:25 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:18.579 12:16:25 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:18.579 12:16:25 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:18.579 12:16:25 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:18.579 12:16:25 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:18.579 12:16:25 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:18.579 12:16:25 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:18.579 12:16:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:18.579 12:16:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:18.579 12:16:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.579 12:16:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:18.579 12:16:25 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:18.579 12:16:25 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:18.579 12:16:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:18.579 12:16:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:18.579 12:16:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:18.579 12:16:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:18.579 12:16:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.579 12:16:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:18.579 12:16:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:18.579 12:16:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:18.579 12:16:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.579 12:16:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:18.579 12:16:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:18.579 12:16:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:18.579 12:16:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.579 12:16:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:18.579 12:16:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:18.579 12:16:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:18.579 12:16:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.579 12:16:25 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:18.579 12:16:25 -- common/autotest_common.sh@1572 -- # return 0 00:04:18.579 12:16:25 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:18.579 12:16:25 -- common/autotest_common.sh@1580 -- # return 0 00:04:18.579 12:16:25 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:18.579 12:16:25 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:18.579 12:16:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:18.579 12:16:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:18.579 12:16:25 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:18.579 12:16:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.579 12:16:25 -- common/autotest_common.sh@10 -- # set +x 00:04:18.579 12:16:25 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:18.579 12:16:25 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:18.579 12:16:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.579 12:16:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.579 12:16:25 -- common/autotest_common.sh@10 -- # set +x 00:04:18.579 ************************************ 00:04:18.579 START TEST env 00:04:18.579 ************************************ 00:04:18.579 12:16:25 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:18.579 * Looking for test storage... 
00:04:18.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:18.579 12:16:25 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:18.579 12:16:25 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:18.579 12:16:25 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:18.840 12:16:25 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:18.840 12:16:25 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.840 12:16:25 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.840 12:16:25 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.840 12:16:25 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.840 12:16:25 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.840 12:16:25 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.840 12:16:25 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.840 12:16:25 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.840 12:16:25 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.840 12:16:25 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.840 12:16:25 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.840 12:16:25 env -- scripts/common.sh@344 -- # case "$op" in 00:04:18.840 12:16:25 env -- scripts/common.sh@345 -- # : 1 00:04:18.840 12:16:25 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.840 12:16:25 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.840 12:16:25 env -- scripts/common.sh@365 -- # decimal 1 00:04:18.840 12:16:25 env -- scripts/common.sh@353 -- # local d=1 00:04:18.840 12:16:25 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.840 12:16:25 env -- scripts/common.sh@355 -- # echo 1 00:04:18.840 12:16:25 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.840 12:16:25 env -- scripts/common.sh@366 -- # decimal 2 00:04:18.840 12:16:25 env -- scripts/common.sh@353 -- # local d=2 00:04:18.840 12:16:25 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.840 12:16:25 env -- scripts/common.sh@355 -- # echo 2 00:04:18.840 12:16:25 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.840 12:16:25 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.840 12:16:25 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.840 12:16:25 env -- scripts/common.sh@368 -- # return 0 00:04:18.840 12:16:25 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.840 12:16:25 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:18.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.840 --rc genhtml_branch_coverage=1 00:04:18.840 --rc genhtml_function_coverage=1 00:04:18.840 --rc genhtml_legend=1 00:04:18.840 --rc geninfo_all_blocks=1 00:04:18.840 --rc geninfo_unexecuted_blocks=1 00:04:18.840 00:04:18.840 ' 00:04:18.840 12:16:25 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:18.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.840 --rc genhtml_branch_coverage=1 00:04:18.840 --rc genhtml_function_coverage=1 00:04:18.840 --rc genhtml_legend=1 00:04:18.840 --rc geninfo_all_blocks=1 00:04:18.840 --rc geninfo_unexecuted_blocks=1 00:04:18.840 00:04:18.840 ' 00:04:18.840 12:16:25 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:18.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.840 --rc genhtml_branch_coverage=1 00:04:18.840 --rc genhtml_function_coverage=1 00:04:18.840 --rc 
genhtml_legend=1 00:04:18.840 --rc geninfo_all_blocks=1 00:04:18.840 --rc geninfo_unexecuted_blocks=1 00:04:18.840 00:04:18.840 ' 00:04:18.840 12:16:25 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:18.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.840 --rc genhtml_branch_coverage=1 00:04:18.840 --rc genhtml_function_coverage=1 00:04:18.840 --rc genhtml_legend=1 00:04:18.840 --rc geninfo_all_blocks=1 00:04:18.840 --rc geninfo_unexecuted_blocks=1 00:04:18.840 00:04:18.840 ' 00:04:18.840 12:16:25 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:18.840 12:16:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.840 12:16:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.840 12:16:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.840 ************************************ 00:04:18.840 START TEST env_memory 00:04:18.840 ************************************ 00:04:18.840 12:16:25 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:18.840 00:04:18.840 00:04:18.840 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.840 http://cunit.sourceforge.net/ 00:04:18.840 00:04:18.840 00:04:18.840 Suite: memory 00:04:18.840 Test: alloc and free memory map ...[2024-12-16 12:16:25.777489] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:18.840 passed 00:04:18.840 Test: mem map translation ...[2024-12-16 12:16:25.817612] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:18.840 [2024-12-16 12:16:25.817662] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:18.840 [2024-12-16 12:16:25.817724] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:18.840 [2024-12-16 12:16:25.817739] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:18.840 passed 00:04:18.840 Test: mem map registration ...[2024-12-16 12:16:25.888000] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:18.840 [2024-12-16 12:16:25.888049] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:18.840 passed 00:04:19.101 Test: mem map adjacent registrations ...passed 00:04:19.101 00:04:19.101 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.101 suites 1 1 n/a 0 0 00:04:19.101 tests 4 4 4 0 0 00:04:19.101 asserts 152 152 152 0 n/a 00:04:19.101 00:04:19.101 Elapsed time = 0.240 seconds 00:04:19.101 ************************************ 00:04:19.101 END TEST env_memory 00:04:19.101 ************************************ 00:04:19.101 00:04:19.101 real 0m0.272s 00:04:19.101 user 0m0.248s 00:04:19.101 sys 0m0.015s 00:04:19.101 12:16:25 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.101 12:16:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:19.101 12:16:26 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:19.101 12:16:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.101 12:16:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.101 12:16:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.101 ************************************ 00:04:19.101 START TEST env_vtophys 00:04:19.101 ************************************ 00:04:19.101 12:16:26 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:19.101 EAL: lib.eal log level changed from notice to debug 00:04:19.101 EAL: Detected lcore 0 as core 0 on socket 0 00:04:19.101 EAL: Detected lcore 1 as core 0 on socket 0 00:04:19.101 EAL: Detected lcore 2 as core 0 on socket 0 00:04:19.101 EAL: Detected lcore 3 as core 0 on socket 0 00:04:19.101 EAL: Detected lcore 4 as core 0 on socket 0 00:04:19.101 EAL: Detected lcore 5 as core 0 on socket 0 00:04:19.101 EAL: Detected lcore 6 as core 0 on socket 0 00:04:19.101 EAL: Detected lcore 7 as core 0 on socket 0 00:04:19.101 EAL: Detected lcore 8 as core 0 on socket 0 00:04:19.101 EAL: Detected lcore 9 as core 0 on socket 0 00:04:19.101 EAL: Maximum logical cores by configuration: 128 00:04:19.101 EAL: Detected CPU lcores: 10 00:04:19.101 EAL: Detected NUMA nodes: 1 00:04:19.101 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:19.101 EAL: Detected shared linkage of DPDK 00:04:19.101 EAL: No shared files mode enabled, IPC will be disabled 00:04:19.101 EAL: Selected IOVA mode 'PA' 00:04:19.101 EAL: Probing VFIO support... 00:04:19.101 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:19.101 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:19.101 EAL: Ask a virtual area of 0x2e000 bytes 00:04:19.101 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:19.101 EAL: Setting up physically contiguous memory... 
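The "Selected IOVA mode 'PA'" line above means DPDK will hand physical addresses to the hardware, so every DMA buffer needs a virtual-to-physical lookup; that lookup is exactly what this vtophys binary stresses. A minimal sketch of the same translation against the public SPDK env API (illustrative only, not the test's actual source; the app name and sizes are made up):

    /* vtophys_sketch.c - a minimal sketch assuming the public SPDK env API
     * (spdk_env_init, spdk_dma_zmalloc, spdk_vtophys); not the test's code. */
    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch";          /* illustrative app name */
        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }

        /* 1 MiB of pinned, DMA-safe memory, 4 KiB aligned. */
        void *buf = spdk_dma_zmalloc(1 << 20, 0x1000, NULL);
        if (buf == NULL) {
            spdk_env_fini();
            return 1;
        }

        /* The translation under test: virtual address to physical address. */
        uint64_t paddr = spdk_vtophys(buf, NULL);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            fprintf(stderr, "no translation for %p\n", buf);
        } else {
            printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
        }

        spdk_dma_free(buf);
        spdk_env_fini();
        return 0;
    }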
00:04:19.101 EAL: Setting maximum number of open files to 524288 00:04:19.101 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:19.101 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:19.101 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.101 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:19.101 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.101 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.101 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:19.101 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:19.101 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.101 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:19.101 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.101 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.101 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:19.101 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:19.101 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.101 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:19.101 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.101 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.101 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:19.101 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:19.101 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.101 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:19.101 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.101 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.101 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:19.101 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:19.101 EAL: Hugepages will be freed exactly as allocated. 00:04:19.101 EAL: No shared files mode enabled, IPC is disabled 00:04:19.101 EAL: No shared files mode enabled, IPC is disabled 00:04:19.362 EAL: TSC frequency is ~2600000 KHz 00:04:19.362 EAL: Main lcore 0 is ready (tid=7f2c628cea40;cpuset=[0]) 00:04:19.362 EAL: Trying to obtain current memory policy. 00:04:19.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.362 EAL: Restoring previous memory policy: 0 00:04:19.362 EAL: request: mp_malloc_sync 00:04:19.362 EAL: No shared files mode enabled, IPC is disabled 00:04:19.362 EAL: Heap on socket 0 was expanded by 2MB 00:04:19.362 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:19.362 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:19.362 EAL: Mem event callback 'spdk:(nil)' registered 00:04:19.362 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:19.362 00:04:19.362 00:04:19.362 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.362 http://cunit.sourceforge.net/ 00:04:19.362 00:04:19.362 00:04:19.363 Suite: components_suite 00:04:19.623 Test: vtophys_malloc_test ...passed 00:04:19.623 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:19.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.623 EAL: Restoring previous memory policy: 4 00:04:19.623 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.623 EAL: request: mp_malloc_sync 00:04:19.623 EAL: No shared files mode enabled, IPC is disabled 00:04:19.623 EAL: Heap on socket 0 was expanded by 4MB 00:04:19.623 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.623 EAL: request: mp_malloc_sync 00:04:19.623 EAL: No shared files mode enabled, IPC is disabled 00:04:19.623 EAL: Heap on socket 0 was shrunk by 4MB 00:04:19.623 EAL: Trying to obtain current memory policy. 00:04:19.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.623 EAL: Restoring previous memory policy: 4 00:04:19.623 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.623 EAL: request: mp_malloc_sync 00:04:19.623 EAL: No shared files mode enabled, IPC is disabled 00:04:19.623 EAL: Heap on socket 0 was expanded by 6MB 00:04:19.623 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.623 EAL: request: mp_malloc_sync 00:04:19.623 EAL: No shared files mode enabled, IPC is disabled 00:04:19.623 EAL: Heap on socket 0 was shrunk by 6MB 00:04:19.623 EAL: Trying to obtain current memory policy. 00:04:19.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.623 EAL: Restoring previous memory policy: 4 00:04:19.623 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.623 EAL: request: mp_malloc_sync 00:04:19.623 EAL: No shared files mode enabled, IPC is disabled 00:04:19.623 EAL: Heap on socket 0 was expanded by 10MB 00:04:19.623 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.623 EAL: request: mp_malloc_sync 00:04:19.624 EAL: No shared files mode enabled, IPC is disabled 00:04:19.624 EAL: Heap on socket 0 was shrunk by 10MB 00:04:19.624 EAL: Trying to obtain current memory policy. 00:04:19.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.624 EAL: Restoring previous memory policy: 4 00:04:19.624 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.624 EAL: request: mp_malloc_sync 00:04:19.624 EAL: No shared files mode enabled, IPC is disabled 00:04:19.624 EAL: Heap on socket 0 was expanded by 18MB 00:04:19.624 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.624 EAL: request: mp_malloc_sync 00:04:19.624 EAL: No shared files mode enabled, IPC is disabled 00:04:19.624 EAL: Heap on socket 0 was shrunk by 18MB 00:04:19.624 EAL: Trying to obtain current memory policy. 00:04:19.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.624 EAL: Restoring previous memory policy: 4 00:04:19.624 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.624 EAL: request: mp_malloc_sync 00:04:19.624 EAL: No shared files mode enabled, IPC is disabled 00:04:19.624 EAL: Heap on socket 0 was expanded by 34MB 00:04:19.884 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.884 EAL: request: mp_malloc_sync 00:04:19.884 EAL: No shared files mode enabled, IPC is disabled 00:04:19.884 EAL: Heap on socket 0 was shrunk by 34MB 00:04:19.884 EAL: Trying to obtain current memory policy. 
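The alternating "expanded by N MB" / "shrunk by N MB" pairs are the dynamic-memory path under test: each spdk_malloc() forces the DPDK heap to grow by roughly the requested size plus hugepage rounding, each spdk_free() returns the pages, and every transition is reported through the registered 'spdk:' mem event callback. A sketch of that allocate/free ladder, assuming the public spdk_malloc/spdk_free API rather than the test's own loop:

    /* Walks allocation sizes upward so each step triggers one heap expand
     * (on alloc) and one shrink (on free), as in the log above. Sketch only. */
    #include <stdio.h>
    #include "spdk/env.h"

    static void heap_ladder(void)
    {
        for (size_t size = 4ull << 20; size <= (1024ull << 20); size *= 2) {
            void *buf = spdk_malloc(size, 0x200, NULL,
                                    SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
            if (buf == NULL) {
                fprintf(stderr, "alloc of %zu MiB failed\n", size >> 20);
                break;
            }
            spdk_free(buf);    /* heap shrinks; mem event callback fires */
        }
    }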
00:04:19.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.884 EAL: Restoring previous memory policy: 4 00:04:19.884 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.884 EAL: request: mp_malloc_sync 00:04:19.884 EAL: No shared files mode enabled, IPC is disabled 00:04:19.884 EAL: Heap on socket 0 was expanded by 66MB 00:04:19.884 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.884 EAL: request: mp_malloc_sync 00:04:19.884 EAL: No shared files mode enabled, IPC is disabled 00:04:19.884 EAL: Heap on socket 0 was shrunk by 66MB 00:04:19.884 EAL: Trying to obtain current memory policy. 00:04:19.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.884 EAL: Restoring previous memory policy: 4 00:04:19.884 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.884 EAL: request: mp_malloc_sync 00:04:19.884 EAL: No shared files mode enabled, IPC is disabled 00:04:19.884 EAL: Heap on socket 0 was expanded by 130MB 00:04:20.145 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.145 EAL: request: mp_malloc_sync 00:04:20.145 EAL: No shared files mode enabled, IPC is disabled 00:04:20.145 EAL: Heap on socket 0 was shrunk by 130MB 00:04:20.407 EAL: Trying to obtain current memory policy. 00:04:20.407 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.407 EAL: Restoring previous memory policy: 4 00:04:20.407 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.407 EAL: request: mp_malloc_sync 00:04:20.407 EAL: No shared files mode enabled, IPC is disabled 00:04:20.407 EAL: Heap on socket 0 was expanded by 258MB 00:04:20.668 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.668 EAL: request: mp_malloc_sync 00:04:20.668 EAL: No shared files mode enabled, IPC is disabled 00:04:20.668 EAL: Heap on socket 0 was shrunk by 258MB 00:04:20.929 EAL: Trying to obtain current memory policy. 00:04:20.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.200 EAL: Restoring previous memory policy: 4 00:04:21.200 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.200 EAL: request: mp_malloc_sync 00:04:21.200 EAL: No shared files mode enabled, IPC is disabled 00:04:21.200 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.810 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.810 EAL: request: mp_malloc_sync 00:04:21.810 EAL: No shared files mode enabled, IPC is disabled 00:04:21.810 EAL: Heap on socket 0 was shrunk by 514MB 00:04:22.382 EAL: Trying to obtain current memory policy. 
00:04:22.382 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.382 EAL: Restoring previous memory policy: 4 00:04:22.382 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.382 EAL: request: mp_malloc_sync 00:04:22.382 EAL: No shared files mode enabled, IPC is disabled 00:04:22.382 EAL: Heap on socket 0 was expanded by 1026MB 00:04:23.767 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.767 EAL: request: mp_malloc_sync 00:04:23.767 EAL: No shared files mode enabled, IPC is disabled 00:04:23.767 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:24.704 passed 00:04:24.704 00:04:24.704 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.704 suites 1 1 n/a 0 0 00:04:24.704 tests 2 2 2 0 0 00:04:24.704 asserts 5782 5782 5782 0 n/a 00:04:24.704 00:04:24.704 Elapsed time = 5.158 seconds 00:04:24.704 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.704 EAL: request: mp_malloc_sync 00:04:24.704 EAL: No shared files mode enabled, IPC is disabled 00:04:24.704 EAL: Heap on socket 0 was shrunk by 2MB 00:04:24.704 EAL: No shared files mode enabled, IPC is disabled 00:04:24.704 EAL: No shared files mode enabled, IPC is disabled 00:04:24.704 EAL: No shared files mode enabled, IPC is disabled 00:04:24.704 00:04:24.704 real 0m5.433s 00:04:24.704 user 0m4.397s 00:04:24.704 sys 0m0.883s 00:04:24.704 12:16:31 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.704 12:16:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:24.704 ************************************ 00:04:24.704 END TEST env_vtophys 00:04:24.704 ************************************ 00:04:24.704 12:16:31 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.704 12:16:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.704 12:16:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.704 12:16:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.704 ************************************ 00:04:24.704 START TEST env_pci 00:04:24.704 ************************************ 00:04:24.704 12:16:31 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.704 00:04:24.704 00:04:24.704 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.704 http://cunit.sourceforge.net/ 00:04:24.704 00:04:24.704 00:04:24.704 Suite: pci 00:04:24.705 Test: pci_hook ...[2024-12-16 12:16:31.574558] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58815 has claimed it 00:04:24.705 passed 00:04:24.705 00:04:24.705 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.705 suites 1 1 n/a 0 0 00:04:24.705 tests 1 1 1 0 0 00:04:24.705 asserts 25 25 25 0 n/a 00:04:24.705 00:04:24.705 Elapsed time = 0.005 seconds 00:04:24.705 EAL: Cannot find device (10000:00:01.0) 00:04:24.705 EAL: Failed to attach device on primary process 00:04:24.705 00:04:24.705 real 0m0.059s 00:04:24.705 user 0m0.026s 00:04:24.705 sys 0m0.032s 00:04:24.705 12:16:31 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.705 12:16:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:24.705 ************************************ 00:04:24.705 END TEST env_pci 00:04:24.705 ************************************ 00:04:24.705 12:16:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:24.705 12:16:31 env -- env/env.sh@15 -- # uname 00:04:24.705 12:16:31 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:24.705 12:16:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:24.705 12:16:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.705 12:16:31 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:24.705 12:16:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.705 12:16:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.705 ************************************ 00:04:24.705 START TEST env_dpdk_post_init 00:04:24.705 ************************************ 00:04:24.705 12:16:31 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.705 EAL: Detected CPU lcores: 10 00:04:24.705 EAL: Detected NUMA nodes: 1 00:04:24.705 EAL: Detected shared linkage of DPDK 00:04:24.705 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.705 EAL: Selected IOVA mode 'PA' 00:04:24.967 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.967 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:24.967 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:24.967 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:24.967 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:24.967 Starting DPDK initialization... 00:04:24.967 Starting SPDK post initialization... 00:04:24.967 SPDK NVMe probe 00:04:24.967 Attaching to 0000:00:10.0 00:04:24.967 Attaching to 0000:00:11.0 00:04:24.967 Attaching to 0000:00:12.0 00:04:24.967 Attaching to 0000:00:13.0 00:04:24.967 Attached to 0000:00:13.0 00:04:24.967 Attached to 0000:00:10.0 00:04:24.967 Attached to 0000:00:11.0 00:04:24.967 Attached to 0000:00:12.0 00:04:24.967 Cleaning up... 
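Note that "Attaching to" is printed in bus order (10.0, 11.0, 12.0, 13.0) while "Attached to" completes in a different order (13.0 first): spdk_nvme_probe() drives the controller bring-ups concurrently and reports each one from an attach callback as it finishes. The shape of that flow, sketched against the public NVMe driver API (not the test's actual source):

    /* Probe/attach callbacks in the style that produces the lines above. */
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;    /* true = go ahead and attach this controller */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        /* Called as each bring-up finishes; completion order is not
         * guaranteed, hence 0000:00:13.0 reporting first in the log. */
        printf("Attached to %s\n", trid->traddr);
    }

    static int
    probe_all_pcie(void)
    {
        /* NULL trid = enumerate every PCIe NVMe controller. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }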
00:04:24.967 00:04:24.967 real 0m0.258s 00:04:24.967 user 0m0.087s 00:04:24.967 sys 0m0.072s 00:04:24.967 ************************************ 00:04:24.967 END TEST env_dpdk_post_init 00:04:24.967 ************************************ 00:04:24.967 12:16:31 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.967 12:16:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.967 12:16:31 env -- env/env.sh@26 -- # uname 00:04:24.967 12:16:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:24.967 12:16:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.967 12:16:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.967 12:16:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.967 12:16:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.967 ************************************ 00:04:24.967 START TEST env_mem_callbacks 00:04:24.967 ************************************ 00:04:24.967 12:16:31 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.967 EAL: Detected CPU lcores: 10 00:04:24.967 EAL: Detected NUMA nodes: 1 00:04:24.967 EAL: Detected shared linkage of DPDK 00:04:24.967 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.967 EAL: Selected IOVA mode 'PA' 00:04:25.230 00:04:25.230 00:04:25.230 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.230 http://cunit.sourceforge.net/ 00:04:25.230 00:04:25.230 00:04:25.230 Suite: memory 00:04:25.230 Test: test ...TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.230 00:04:25.230 register 0x200000200000 2097152 00:04:25.230 malloc 3145728 00:04:25.230 register 0x200000400000 4194304 00:04:25.230 buf 0x2000004fffc0 len 3145728 PASSED 00:04:25.230 malloc 64 00:04:25.230 buf 0x2000004ffec0 len 64 PASSED 00:04:25.230 malloc 4194304 00:04:25.230 register 0x200000800000 6291456 00:04:25.230 buf 0x2000009fffc0 len 4194304 PASSED 00:04:25.230 free 0x2000004fffc0 3145728 00:04:25.230 free 0x2000004ffec0 64 00:04:25.230 unregister 0x200000400000 4194304 PASSED 00:04:25.230 free 0x2000009fffc0 4194304 00:04:25.230 unregister 0x200000800000 6291456 PASSED 00:04:25.230 malloc 8388608 00:04:25.230 register 0x200000400000 10485760 00:04:25.230 buf 0x2000005fffc0 len 8388608 PASSED 00:04:25.230 free 0x2000005fffc0 8388608 00:04:25.230 unregister 0x200000400000 10485760 PASSED 00:04:25.230 passed 00:04:25.230 00:04:25.230 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.230 suites 1 1 n/a 0 0 00:04:25.230 tests 1 1 1 0 0 00:04:25.230 asserts 15 15 15 0 n/a 00:04:25.230 00:04:25.230 Elapsed time = 0.048 seconds 00:04:25.230 00:04:25.230 real 0m0.226s 00:04:25.230 user 0m0.067s 00:04:25.230 sys 0m0.057s 00:04:25.230 12:16:32 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.230 ************************************ 00:04:25.230 END TEST env_mem_callbacks 00:04:25.230 ************************************ 00:04:25.230 12:16:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:25.230 ************************************ 00:04:25.230 END TEST env 00:04:25.230 ************************************ 00:04:25.230 00:04:25.230 real 0m6.720s 00:04:25.230 user 0m4.974s 00:04:25.230 sys 0m1.287s 00:04:25.230 12:16:32 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.230 12:16:32 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:25.230 12:16:32 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.230 12:16:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.230 12:16:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.230 12:16:32 -- common/autotest_common.sh@10 -- # set +x 00:04:25.491 ************************************ 00:04:25.491 START TEST rpc 00:04:25.491 ************************************ 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.491 * Looking for test storage... 00:04:25.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:25.491 12:16:32 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.491 12:16:32 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.491 12:16:32 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.491 12:16:32 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.491 12:16:32 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.491 12:16:32 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.491 12:16:32 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.491 12:16:32 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.491 12:16:32 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.491 12:16:32 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.491 12:16:32 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.491 12:16:32 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.491 12:16:32 rpc -- scripts/common.sh@345 -- # : 1 00:04:25.491 12:16:32 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.491 12:16:32 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.491 12:16:32 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.491 12:16:32 rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.491 12:16:32 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.491 12:16:32 rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.491 12:16:32 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.491 12:16:32 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.491 12:16:32 rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.491 12:16:32 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.491 12:16:32 rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.491 12:16:32 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.491 12:16:32 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.491 12:16:32 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.491 12:16:32 rpc -- scripts/common.sh@368 -- # return 0 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:25.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.491 --rc genhtml_branch_coverage=1 00:04:25.491 --rc genhtml_function_coverage=1 00:04:25.491 --rc genhtml_legend=1 00:04:25.491 --rc geninfo_all_blocks=1 00:04:25.491 --rc geninfo_unexecuted_blocks=1 00:04:25.491 00:04:25.491 ' 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:25.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.491 --rc genhtml_branch_coverage=1 00:04:25.491 --rc genhtml_function_coverage=1 00:04:25.491 --rc genhtml_legend=1 00:04:25.491 --rc geninfo_all_blocks=1 00:04:25.491 --rc geninfo_unexecuted_blocks=1 00:04:25.491 00:04:25.491 ' 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:25.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.491 --rc genhtml_branch_coverage=1 00:04:25.491 --rc genhtml_function_coverage=1 00:04:25.491 --rc genhtml_legend=1 00:04:25.491 --rc geninfo_all_blocks=1 00:04:25.491 --rc geninfo_unexecuted_blocks=1 00:04:25.491 00:04:25.491 ' 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:25.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.491 --rc genhtml_branch_coverage=1 00:04:25.491 --rc genhtml_function_coverage=1 00:04:25.491 --rc genhtml_legend=1 00:04:25.491 --rc geninfo_all_blocks=1 00:04:25.491 --rc geninfo_unexecuted_blocks=1 00:04:25.491 00:04:25.491 ' 00:04:25.491 12:16:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58942 00:04:25.491 12:16:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.491 12:16:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58942 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@835 -- # '[' -z 58942 ']' 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
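waitforlisten above simply polls until spdk_tgt accepts connections on /var/tmp/spdk.sock; from then on, every rpc_cmd in this suite is one JSON-RPC 2.0 round trip over that UNIX socket. A bare-bones client showing the wire protocol (rpc_get_methods is a real SPDK method; error handling is trimmed and the buffer size is arbitrary):

    /* Minimal sketch of what rpc_cmd does under the hood: connect to the
     * UNIX-domain socket and exchange one JSON-RPC 2.0 message. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };

        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");    /* the condition waitforlisten retries on */
            return 1;
        }

        const char *req =
            "{\"jsonrpc\":\"2.0\",\"method\":\"rpc_get_methods\",\"id\":1}";
        write(fd, req, strlen(req));

        char resp[65536];
        ssize_t n = read(fd, resp, sizeof(resp) - 1);
        if (n > 0) {
            resp[n] = '\0';
            printf("%s\n", resp);    /* JSON list of available methods */
        }
        close(fd);
        return 0;
    }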
00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.491 12:16:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.491 12:16:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:25.491 [2024-12-16 12:16:32.582951] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:25.491 [2024-12-16 12:16:32.583099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58942 ] 00:04:25.752 [2024-12-16 12:16:32.749715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.014 [2024-12-16 12:16:32.872153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:26.014 [2024-12-16 12:16:32.872235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58942' to capture a snapshot of events at runtime. 00:04:26.014 [2024-12-16 12:16:32.872247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:26.014 [2024-12-16 12:16:32.872259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:26.014 [2024-12-16 12:16:32.872267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58942 for offline analysis/debug. 00:04:26.014 [2024-12-16 12:16:32.873202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.587 12:16:33 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.587 12:16:33 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:26.587 12:16:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.587 12:16:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.587 12:16:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:26.587 12:16:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:26.587 12:16:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.587 12:16:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.587 12:16:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.587 ************************************ 00:04:26.587 START TEST rpc_integrity 00:04:26.587 ************************************ 00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:26.587 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.587 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.587 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.587 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.587 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
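For reference, 'rpc_cmd bdev_malloc_create 8 512' requests an 8 MiB malloc bdev with 512-byte blocks; the CLI converts the size to a block count (8 * 1024 * 1024 / 512 = 16384), which is why the bdev JSON that follows reports "num_blocks": 16384. The request on the wire looks roughly like this (a sketch; the id and field order are arbitrary):

    /* Sent over /var/tmp/spdk.sock exactly as in the socket sketch above;
     * the response's "result" carries the new bdev's name ("Malloc0"). */
    const char *create_req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
        "\"params\":{\"num_blocks\":16384,\"block_size\":512}}";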
00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.587 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:26.587 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.587 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.587 { 00:04:26.587 "name": "Malloc0", 00:04:26.587 "aliases": [ 00:04:26.587 "5fb660d7-4e4d-4ee7-8024-9adfa3aac4be" 00:04:26.587 ], 00:04:26.587 "product_name": "Malloc disk", 00:04:26.587 "block_size": 512, 00:04:26.587 "num_blocks": 16384, 00:04:26.587 "uuid": "5fb660d7-4e4d-4ee7-8024-9adfa3aac4be", 00:04:26.587 "assigned_rate_limits": { 00:04:26.587 "rw_ios_per_sec": 0, 00:04:26.587 "rw_mbytes_per_sec": 0, 00:04:26.587 "r_mbytes_per_sec": 0, 00:04:26.587 "w_mbytes_per_sec": 0 00:04:26.587 }, 00:04:26.587 "claimed": false, 00:04:26.587 "zoned": false, 00:04:26.587 "supported_io_types": { 00:04:26.587 "read": true, 00:04:26.587 "write": true, 00:04:26.587 "unmap": true, 00:04:26.587 "flush": true, 00:04:26.587 "reset": true, 00:04:26.587 "nvme_admin": false, 00:04:26.587 "nvme_io": false, 00:04:26.587 "nvme_io_md": false, 00:04:26.587 "write_zeroes": true, 00:04:26.587 "zcopy": true, 00:04:26.587 "get_zone_info": false, 00:04:26.587 "zone_management": false, 00:04:26.587 "zone_append": false, 00:04:26.587 "compare": false, 00:04:26.587 "compare_and_write": false, 00:04:26.587 "abort": true, 00:04:26.587 "seek_hole": false, 00:04:26.587 "seek_data": false, 00:04:26.587 "copy": true, 00:04:26.587 "nvme_iov_md": false 00:04:26.587 }, 00:04:26.587 "memory_domains": [ 00:04:26.587 { 00:04:26.587 "dma_device_id": "system", 00:04:26.587 "dma_device_type": 1 00:04:26.587 }, 00:04:26.587 { 00:04:26.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.587 "dma_device_type": 2 00:04:26.587 } 00:04:26.587 ], 00:04:26.587 "driver_specific": {} 00:04:26.587 } 00:04:26.587 ]' 00:04:26.587 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.587 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.587 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.587 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.587 [2024-12-16 12:16:33.689409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:26.587 [2024-12-16 12:16:33.689489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.587 [2024-12-16 12:16:33.689516] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:26.587 [2024-12-16 12:16:33.689528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.848 [2024-12-16 12:16:33.692057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.848 [2024-12-16 12:16:33.692117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.848 
Passthru0 00:04:26.848 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.848 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.848 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.848 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.848 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.848 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.848 { 00:04:26.848 "name": "Malloc0", 00:04:26.848 "aliases": [ 00:04:26.848 "5fb660d7-4e4d-4ee7-8024-9adfa3aac4be" 00:04:26.848 ], 00:04:26.848 "product_name": "Malloc disk", 00:04:26.848 "block_size": 512, 00:04:26.848 "num_blocks": 16384, 00:04:26.848 "uuid": "5fb660d7-4e4d-4ee7-8024-9adfa3aac4be", 00:04:26.848 "assigned_rate_limits": { 00:04:26.849 "rw_ios_per_sec": 0, 00:04:26.849 "rw_mbytes_per_sec": 0, 00:04:26.849 "r_mbytes_per_sec": 0, 00:04:26.849 "w_mbytes_per_sec": 0 00:04:26.849 }, 00:04:26.849 "claimed": true, 00:04:26.849 "claim_type": "exclusive_write", 00:04:26.849 "zoned": false, 00:04:26.849 "supported_io_types": { 00:04:26.849 "read": true, 00:04:26.849 "write": true, 00:04:26.849 "unmap": true, 00:04:26.849 "flush": true, 00:04:26.849 "reset": true, 00:04:26.849 "nvme_admin": false, 00:04:26.849 "nvme_io": false, 00:04:26.849 "nvme_io_md": false, 00:04:26.849 "write_zeroes": true, 00:04:26.849 "zcopy": true, 00:04:26.849 "get_zone_info": false, 00:04:26.849 "zone_management": false, 00:04:26.849 "zone_append": false, 00:04:26.849 "compare": false, 00:04:26.849 "compare_and_write": false, 00:04:26.849 "abort": true, 00:04:26.849 "seek_hole": false, 00:04:26.849 "seek_data": false, 00:04:26.849 "copy": true, 00:04:26.849 "nvme_iov_md": false 00:04:26.849 }, 00:04:26.849 "memory_domains": [ 00:04:26.849 { 00:04:26.849 "dma_device_id": "system", 00:04:26.849 "dma_device_type": 1 00:04:26.849 }, 00:04:26.849 { 00:04:26.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.849 "dma_device_type": 2 00:04:26.849 } 00:04:26.849 ], 00:04:26.849 "driver_specific": {} 00:04:26.849 }, 00:04:26.849 { 00:04:26.849 "name": "Passthru0", 00:04:26.849 "aliases": [ 00:04:26.849 "376d2a23-7b2f-5ef4-8d6a-b6fb97deaa75" 00:04:26.849 ], 00:04:26.849 "product_name": "passthru", 00:04:26.849 "block_size": 512, 00:04:26.849 "num_blocks": 16384, 00:04:26.849 "uuid": "376d2a23-7b2f-5ef4-8d6a-b6fb97deaa75", 00:04:26.849 "assigned_rate_limits": { 00:04:26.849 "rw_ios_per_sec": 0, 00:04:26.849 "rw_mbytes_per_sec": 0, 00:04:26.849 "r_mbytes_per_sec": 0, 00:04:26.849 "w_mbytes_per_sec": 0 00:04:26.849 }, 00:04:26.849 "claimed": false, 00:04:26.849 "zoned": false, 00:04:26.849 "supported_io_types": { 00:04:26.849 "read": true, 00:04:26.849 "write": true, 00:04:26.849 "unmap": true, 00:04:26.849 "flush": true, 00:04:26.849 "reset": true, 00:04:26.849 "nvme_admin": false, 00:04:26.849 "nvme_io": false, 00:04:26.849 "nvme_io_md": false, 00:04:26.849 "write_zeroes": true, 00:04:26.849 "zcopy": true, 00:04:26.849 "get_zone_info": false, 00:04:26.849 "zone_management": false, 00:04:26.849 "zone_append": false, 00:04:26.849 "compare": false, 00:04:26.849 "compare_and_write": false, 00:04:26.849 "abort": true, 00:04:26.849 "seek_hole": false, 00:04:26.849 "seek_data": false, 00:04:26.849 "copy": true, 00:04:26.849 "nvme_iov_md": false 00:04:26.849 }, 00:04:26.849 "memory_domains": [ 00:04:26.849 { 00:04:26.849 "dma_device_id": "system", 00:04:26.849 "dma_device_type": 1 00:04:26.849 }, 
00:04:26.849 { 00:04:26.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.849 "dma_device_type": 2 00:04:26.849 } 00:04:26.849 ], 00:04:26.849 "driver_specific": { 00:04:26.849 "passthru": { 00:04:26.849 "name": "Passthru0", 00:04:26.849 "base_bdev_name": "Malloc0" 00:04:26.849 } 00:04:26.849 } 00:04:26.849 } 00:04:26.849 ]' 00:04:26.849 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.849 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.849 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.849 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.849 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.849 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.849 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:26.849 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.849 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.849 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.849 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.849 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.849 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.849 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.849 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.849 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.849 12:16:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.849 00:04:26.849 real 0m0.239s 00:04:26.849 user 0m0.124s 00:04:26.849 sys 0m0.030s 00:04:26.849 ************************************ 00:04:26.849 END TEST rpc_integrity 00:04:26.849 ************************************ 00:04:26.849 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.849 12:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.849 12:16:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:26.849 12:16:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.849 12:16:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.849 12:16:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.849 ************************************ 00:04:26.849 START TEST rpc_plugins 00:04:26.849 ************************************ 00:04:26.849 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:26.849 12:16:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:26.849 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.849 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.849 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.849 12:16:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:26.849 12:16:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:26.849 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.849 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.849 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.849 12:16:33 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:26.849 { 00:04:26.849 "name": "Malloc1", 00:04:26.849 "aliases": [ 00:04:26.849 "59edfbcd-4f7e-48d9-b201-81fe87d78388" 00:04:26.849 ], 00:04:26.849 "product_name": "Malloc disk", 00:04:26.849 "block_size": 4096, 00:04:26.849 "num_blocks": 256, 00:04:26.849 "uuid": "59edfbcd-4f7e-48d9-b201-81fe87d78388", 00:04:26.849 "assigned_rate_limits": { 00:04:26.849 "rw_ios_per_sec": 0, 00:04:26.849 "rw_mbytes_per_sec": 0, 00:04:26.849 "r_mbytes_per_sec": 0, 00:04:26.849 "w_mbytes_per_sec": 0 00:04:26.849 }, 00:04:26.849 "claimed": false, 00:04:26.849 "zoned": false, 00:04:26.849 "supported_io_types": { 00:04:26.849 "read": true, 00:04:26.849 "write": true, 00:04:26.849 "unmap": true, 00:04:26.849 "flush": true, 00:04:26.849 "reset": true, 00:04:26.849 "nvme_admin": false, 00:04:26.849 "nvme_io": false, 00:04:26.849 "nvme_io_md": false, 00:04:26.849 "write_zeroes": true, 00:04:26.849 "zcopy": true, 00:04:26.849 "get_zone_info": false, 00:04:26.849 "zone_management": false, 00:04:26.849 "zone_append": false, 00:04:26.849 "compare": false, 00:04:26.849 "compare_and_write": false, 00:04:26.849 "abort": true, 00:04:26.849 "seek_hole": false, 00:04:26.849 "seek_data": false, 00:04:26.849 "copy": true, 00:04:26.849 "nvme_iov_md": false 00:04:26.849 }, 00:04:26.849 "memory_domains": [ 00:04:26.849 { 00:04:26.849 "dma_device_id": "system", 00:04:26.849 "dma_device_type": 1 00:04:26.849 }, 00:04:26.849 { 00:04:26.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.849 "dma_device_type": 2 00:04:26.849 } 00:04:26.849 ], 00:04:26.849 "driver_specific": {} 00:04:26.849 } 00:04:26.849 ]' 00:04:26.849 12:16:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:26.849 12:16:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:26.849 12:16:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:26.849 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.849 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.849 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.111 12:16:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:27.111 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.111 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.111 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.111 12:16:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:27.111 12:16:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:27.111 ************************************ 00:04:27.111 END TEST rpc_plugins 00:04:27.111 ************************************ 00:04:27.111 12:16:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:27.111 00:04:27.111 real 0m0.117s 00:04:27.111 user 0m0.065s 00:04:27.111 sys 0m0.018s 00:04:27.111 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.111 12:16:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.111 12:16:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.111 12:16:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.111 12:16:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.111 12:16:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.111 ************************************ 00:04:27.111 START TEST rpc_trace_cmd_test 
00:04:27.111 ************************************ 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:27.111 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58942", 00:04:27.111 "tpoint_group_mask": "0x8", 00:04:27.111 "iscsi_conn": { 00:04:27.111 "mask": "0x2", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "scsi": { 00:04:27.111 "mask": "0x4", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "bdev": { 00:04:27.111 "mask": "0x8", 00:04:27.111 "tpoint_mask": "0xffffffffffffffff" 00:04:27.111 }, 00:04:27.111 "nvmf_rdma": { 00:04:27.111 "mask": "0x10", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "nvmf_tcp": { 00:04:27.111 "mask": "0x20", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "ftl": { 00:04:27.111 "mask": "0x40", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "blobfs": { 00:04:27.111 "mask": "0x80", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "dsa": { 00:04:27.111 "mask": "0x200", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "thread": { 00:04:27.111 "mask": "0x400", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "nvme_pcie": { 00:04:27.111 "mask": "0x800", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "iaa": { 00:04:27.111 "mask": "0x1000", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "nvme_tcp": { 00:04:27.111 "mask": "0x2000", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "bdev_nvme": { 00:04:27.111 "mask": "0x4000", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "sock": { 00:04:27.111 "mask": "0x8000", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "blob": { 00:04:27.111 "mask": "0x10000", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "bdev_raid": { 00:04:27.111 "mask": "0x20000", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 }, 00:04:27.111 "scheduler": { 00:04:27.111 "mask": "0x40000", 00:04:27.111 "tpoint_mask": "0x0" 00:04:27.111 } 00:04:27.111 }' 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:27.111 12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:27.373 ************************************ 00:04:27.373 END TEST rpc_trace_cmd_test 00:04:27.373 ************************************ 00:04:27.373 
12:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:27.373 00:04:27.373 real 0m0.163s 00:04:27.373 user 0m0.132s 00:04:27.373 sys 0m0.021s 00:04:27.373 12:16:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.373 12:16:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.373 12:16:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:27.373 12:16:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:27.373 12:16:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:27.373 12:16:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.373 12:16:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.373 12:16:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.373 ************************************ 00:04:27.373 START TEST rpc_daemon_integrity 00:04:27.373 ************************************ 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.373 { 00:04:27.373 "name": "Malloc2", 00:04:27.373 "aliases": [ 00:04:27.373 "86020caa-e37c-4c00-aae1-c2facfd5891c" 00:04:27.373 ], 00:04:27.373 "product_name": "Malloc disk", 00:04:27.373 "block_size": 512, 00:04:27.373 "num_blocks": 16384, 00:04:27.373 "uuid": "86020caa-e37c-4c00-aae1-c2facfd5891c", 00:04:27.373 "assigned_rate_limits": { 00:04:27.373 "rw_ios_per_sec": 0, 00:04:27.373 "rw_mbytes_per_sec": 0, 00:04:27.373 "r_mbytes_per_sec": 0, 00:04:27.373 "w_mbytes_per_sec": 0 00:04:27.373 }, 00:04:27.373 "claimed": false, 00:04:27.373 "zoned": false, 00:04:27.373 "supported_io_types": { 00:04:27.373 "read": true, 00:04:27.373 "write": true, 00:04:27.373 "unmap": true, 00:04:27.373 "flush": true, 00:04:27.373 "reset": true, 00:04:27.373 "nvme_admin": false, 00:04:27.373 "nvme_io": false, 00:04:27.373 "nvme_io_md": false, 00:04:27.373 "write_zeroes": true, 00:04:27.373 "zcopy": true, 00:04:27.373 
"get_zone_info": false, 00:04:27.373 "zone_management": false, 00:04:27.373 "zone_append": false, 00:04:27.373 "compare": false, 00:04:27.373 "compare_and_write": false, 00:04:27.373 "abort": true, 00:04:27.373 "seek_hole": false, 00:04:27.373 "seek_data": false, 00:04:27.373 "copy": true, 00:04:27.373 "nvme_iov_md": false 00:04:27.373 }, 00:04:27.373 "memory_domains": [ 00:04:27.373 { 00:04:27.373 "dma_device_id": "system", 00:04:27.373 "dma_device_type": 1 00:04:27.373 }, 00:04:27.373 { 00:04:27.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.373 "dma_device_type": 2 00:04:27.373 } 00:04:27.373 ], 00:04:27.373 "driver_specific": {} 00:04:27.373 } 00:04:27.373 ]' 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.373 [2024-12-16 12:16:34.400114] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:27.373 [2024-12-16 12:16:34.400197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.373 [2024-12-16 12:16:34.400222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:27.373 [2024-12-16 12:16:34.400247] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.373 [2024-12-16 12:16:34.402649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.373 [2024-12-16 12:16:34.402707] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.373 Passthru0 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.373 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.373 { 00:04:27.373 "name": "Malloc2", 00:04:27.373 "aliases": [ 00:04:27.373 "86020caa-e37c-4c00-aae1-c2facfd5891c" 00:04:27.373 ], 00:04:27.373 "product_name": "Malloc disk", 00:04:27.373 "block_size": 512, 00:04:27.373 "num_blocks": 16384, 00:04:27.373 "uuid": "86020caa-e37c-4c00-aae1-c2facfd5891c", 00:04:27.373 "assigned_rate_limits": { 00:04:27.373 "rw_ios_per_sec": 0, 00:04:27.373 "rw_mbytes_per_sec": 0, 00:04:27.373 "r_mbytes_per_sec": 0, 00:04:27.373 "w_mbytes_per_sec": 0 00:04:27.373 }, 00:04:27.373 "claimed": true, 00:04:27.373 "claim_type": "exclusive_write", 00:04:27.373 "zoned": false, 00:04:27.373 "supported_io_types": { 00:04:27.373 "read": true, 00:04:27.373 "write": true, 00:04:27.373 "unmap": true, 00:04:27.373 "flush": true, 00:04:27.373 "reset": true, 00:04:27.373 "nvme_admin": false, 00:04:27.373 "nvme_io": false, 00:04:27.373 "nvme_io_md": false, 00:04:27.373 "write_zeroes": true, 00:04:27.373 "zcopy": true, 00:04:27.373 "get_zone_info": false, 00:04:27.373 "zone_management": false, 00:04:27.373 "zone_append": false, 00:04:27.373 "compare": 
false, 00:04:27.373 "compare_and_write": false, 00:04:27.373 "abort": true, 00:04:27.373 "seek_hole": false, 00:04:27.373 "seek_data": false, 00:04:27.373 "copy": true, 00:04:27.373 "nvme_iov_md": false 00:04:27.373 }, 00:04:27.373 "memory_domains": [ 00:04:27.373 { 00:04:27.373 "dma_device_id": "system", 00:04:27.373 "dma_device_type": 1 00:04:27.373 }, 00:04:27.373 { 00:04:27.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.373 "dma_device_type": 2 00:04:27.373 } 00:04:27.373 ], 00:04:27.373 "driver_specific": {} 00:04:27.373 }, 00:04:27.373 { 00:04:27.373 "name": "Passthru0", 00:04:27.373 "aliases": [ 00:04:27.373 "7c6ce417-7e26-575c-9c9b-51e733aa4087" 00:04:27.373 ], 00:04:27.373 "product_name": "passthru", 00:04:27.373 "block_size": 512, 00:04:27.373 "num_blocks": 16384, 00:04:27.373 "uuid": "7c6ce417-7e26-575c-9c9b-51e733aa4087", 00:04:27.373 "assigned_rate_limits": { 00:04:27.373 "rw_ios_per_sec": 0, 00:04:27.373 "rw_mbytes_per_sec": 0, 00:04:27.373 "r_mbytes_per_sec": 0, 00:04:27.373 "w_mbytes_per_sec": 0 00:04:27.373 }, 00:04:27.373 "claimed": false, 00:04:27.373 "zoned": false, 00:04:27.373 "supported_io_types": { 00:04:27.373 "read": true, 00:04:27.373 "write": true, 00:04:27.373 "unmap": true, 00:04:27.373 "flush": true, 00:04:27.373 "reset": true, 00:04:27.373 "nvme_admin": false, 00:04:27.373 "nvme_io": false, 00:04:27.373 "nvme_io_md": false, 00:04:27.373 "write_zeroes": true, 00:04:27.373 "zcopy": true, 00:04:27.373 "get_zone_info": false, 00:04:27.373 "zone_management": false, 00:04:27.373 "zone_append": false, 00:04:27.373 "compare": false, 00:04:27.373 "compare_and_write": false, 00:04:27.373 "abort": true, 00:04:27.373 "seek_hole": false, 00:04:27.373 "seek_data": false, 00:04:27.373 "copy": true, 00:04:27.374 "nvme_iov_md": false 00:04:27.374 }, 00:04:27.374 "memory_domains": [ 00:04:27.374 { 00:04:27.374 "dma_device_id": "system", 00:04:27.374 "dma_device_type": 1 00:04:27.374 }, 00:04:27.374 { 00:04:27.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.374 "dma_device_type": 2 00:04:27.374 } 00:04:27.374 ], 00:04:27.374 "driver_specific": { 00:04:27.374 "passthru": { 00:04:27.374 "name": "Passthru0", 00:04:27.374 "base_bdev_name": "Malloc2" 00:04:27.374 } 00:04:27.374 } 00:04:27.374 } 00:04:27.374 ]' 00:04:27.374 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:27.374 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.374 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.374 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.374 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.374 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.374 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:27.374 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.374 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.635 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.635 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.635 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.635 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.635 12:16:34 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.635 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.635 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.635 ************************************ 00:04:27.635 END TEST rpc_daemon_integrity 00:04:27.635 ************************************ 00:04:27.635 12:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.635 00:04:27.635 real 0m0.258s 00:04:27.635 user 0m0.144s 00:04:27.635 sys 0m0.026s 00:04:27.635 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.635 12:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.635 12:16:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:27.635 12:16:34 rpc -- rpc/rpc.sh@84 -- # killprocess 58942 00:04:27.635 12:16:34 rpc -- common/autotest_common.sh@954 -- # '[' -z 58942 ']' 00:04:27.635 12:16:34 rpc -- common/autotest_common.sh@958 -- # kill -0 58942 00:04:27.635 12:16:34 rpc -- common/autotest_common.sh@959 -- # uname 00:04:27.635 12:16:34 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.635 12:16:34 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58942 00:04:27.635 killing process with pid 58942 00:04:27.635 12:16:34 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.635 12:16:34 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.635 12:16:34 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58942' 00:04:27.635 12:16:34 rpc -- common/autotest_common.sh@973 -- # kill 58942 00:04:27.635 12:16:34 rpc -- common/autotest_common.sh@978 -- # wait 58942 00:04:29.552 ************************************ 00:04:29.552 END TEST rpc 00:04:29.552 ************************************ 00:04:29.552 00:04:29.552 real 0m3.950s 00:04:29.552 user 0m4.275s 00:04:29.552 sys 0m0.722s 00:04:29.552 12:16:36 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.552 12:16:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.552 12:16:36 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:29.552 12:16:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.552 12:16:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.552 12:16:36 -- common/autotest_common.sh@10 -- # set +x 00:04:29.552 ************************************ 00:04:29.552 START TEST skip_rpc 00:04:29.552 ************************************ 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:29.552 * Looking for test storage... 
00:04:29.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.552 12:16:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:29.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.552 --rc genhtml_branch_coverage=1 00:04:29.552 --rc genhtml_function_coverage=1 00:04:29.552 --rc genhtml_legend=1 00:04:29.552 --rc geninfo_all_blocks=1 00:04:29.552 --rc geninfo_unexecuted_blocks=1 00:04:29.552 00:04:29.552 ' 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:29.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.552 --rc genhtml_branch_coverage=1 00:04:29.552 --rc genhtml_function_coverage=1 00:04:29.552 --rc genhtml_legend=1 00:04:29.552 --rc geninfo_all_blocks=1 00:04:29.552 --rc geninfo_unexecuted_blocks=1 00:04:29.552 00:04:29.552 ' 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:29.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.552 --rc genhtml_branch_coverage=1 00:04:29.552 --rc genhtml_function_coverage=1 00:04:29.552 --rc genhtml_legend=1 00:04:29.552 --rc geninfo_all_blocks=1 00:04:29.552 --rc geninfo_unexecuted_blocks=1 00:04:29.552 00:04:29.552 ' 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:29.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.552 --rc genhtml_branch_coverage=1 00:04:29.552 --rc genhtml_function_coverage=1 00:04:29.552 --rc genhtml_legend=1 00:04:29.552 --rc geninfo_all_blocks=1 00:04:29.552 --rc geninfo_unexecuted_blocks=1 00:04:29.552 00:04:29.552 ' 00:04:29.552 12:16:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:29.552 12:16:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:29.552 12:16:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.552 12:16:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.552 ************************************ 00:04:29.552 START TEST skip_rpc 00:04:29.552 ************************************ 00:04:29.552 12:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:29.552 12:16:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59160 00:04:29.552 12:16:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.552 12:16:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:29.552 12:16:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:29.552 [2024-12-16 12:16:36.601263] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
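Note: the skip_rpc case starting above boots the target with --no-rpc-server and, after the sleep, requires that the spdk_get_version RPC fail (the NOT rpc_cmd check that follows). A minimal sketch of that check, assuming an SPDK checkout at $SPDK_DIR and that scripts/rpc.py is the client behind the rpc_cmd wrapper used in this log:

"$SPDK_DIR"/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5                                   # mirrors the rpc/skip_rpc.sh@19 sleep above
if "$SPDK_DIR"/scripts/rpc.py spdk_get_version; then
    echo 'FAIL: RPC answered even though --no-rpc-server was given' >&2
    kill "$tgt_pid"; exit 1
fi
kill "$tgt_pid"; wait "$tgt_pid"          # matches the killprocess/wait pair in the log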
00:04:29.552 [2024-12-16 12:16:36.601403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59160 ] 00:04:29.814 [2024-12-16 12:16:36.765569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.814 [2024-12-16 12:16:36.888228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59160 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59160 ']' 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59160 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59160 00:04:35.094 killing process with pid 59160 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59160' 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59160 00:04:35.094 12:16:41 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59160 00:04:35.663 ************************************ 00:04:35.663 END TEST skip_rpc 00:04:35.663 ************************************ 00:04:35.663 00:04:35.663 real 0m6.209s 00:04:35.663 user 0m5.723s 00:04:35.663 sys 0m0.378s 00:04:35.663 12:16:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.663 12:16:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:35.922 12:16:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:35.922 12:16:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.922 12:16:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.922 12:16:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.922 ************************************ 00:04:35.922 START TEST skip_rpc_with_json 00:04:35.922 ************************************ 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59253 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59253 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59253 ']' 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.922 12:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.922 [2024-12-16 12:16:42.847293] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
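Note: unlike skip_rpc, skip_rpc_with_json starts the target with its RPC server enabled, so the harness blocks in waitforlisten until /var/tmp/spdk.sock answers ("Waiting for process to start up and listen..." above). A hypothetical equivalent of that helper is sketched below; the real one lives in common/autotest_common.sh, and the polling details here are illustrative, not the harness's own code:

waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                     # target died before listening
        "$SPDK_DIR"/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                                       # never started listening
}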
00:04:35.922 [2024-12-16 12:16:42.847387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59253 ] 00:04:35.922 [2024-12-16 12:16:42.986815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.181 [2024-12-16 12:16:43.066199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.747 [2024-12-16 12:16:43.697221] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:36.747 request: 00:04:36.747 { 00:04:36.747 "trtype": "tcp", 00:04:36.747 "method": "nvmf_get_transports", 00:04:36.747 "req_id": 1 00:04:36.747 } 00:04:36.747 Got JSON-RPC error response 00:04:36.747 response: 00:04:36.747 { 00:04:36.747 "code": -19, 00:04:36.747 "message": "No such device" 00:04:36.747 } 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.747 [2024-12-16 12:16:43.709309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.747 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.006 12:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.006 { 00:04:37.006 "subsystems": [ 00:04:37.006 { 00:04:37.006 "subsystem": "fsdev", 00:04:37.006 "config": [ 00:04:37.006 { 00:04:37.006 "method": "fsdev_set_opts", 00:04:37.006 "params": { 00:04:37.006 "fsdev_io_pool_size": 65535, 00:04:37.006 "fsdev_io_cache_size": 256 00:04:37.006 } 00:04:37.006 } 00:04:37.006 ] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "keyring", 00:04:37.006 "config": [] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "iobuf", 00:04:37.006 "config": [ 00:04:37.006 { 00:04:37.006 "method": "iobuf_set_options", 00:04:37.006 "params": { 00:04:37.006 "small_pool_count": 8192, 00:04:37.006 "large_pool_count": 1024, 00:04:37.006 "small_bufsize": 8192, 00:04:37.006 "large_bufsize": 135168, 00:04:37.006 "enable_numa": false 00:04:37.006 } 00:04:37.006 } 00:04:37.006 ] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "sock", 00:04:37.006 "config": [ 00:04:37.006 { 
00:04:37.006 "method": "sock_set_default_impl", 00:04:37.006 "params": { 00:04:37.006 "impl_name": "posix" 00:04:37.006 } 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "method": "sock_impl_set_options", 00:04:37.006 "params": { 00:04:37.006 "impl_name": "ssl", 00:04:37.006 "recv_buf_size": 4096, 00:04:37.006 "send_buf_size": 4096, 00:04:37.006 "enable_recv_pipe": true, 00:04:37.006 "enable_quickack": false, 00:04:37.006 "enable_placement_id": 0, 00:04:37.006 "enable_zerocopy_send_server": true, 00:04:37.006 "enable_zerocopy_send_client": false, 00:04:37.006 "zerocopy_threshold": 0, 00:04:37.006 "tls_version": 0, 00:04:37.006 "enable_ktls": false 00:04:37.006 } 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "method": "sock_impl_set_options", 00:04:37.006 "params": { 00:04:37.006 "impl_name": "posix", 00:04:37.006 "recv_buf_size": 2097152, 00:04:37.006 "send_buf_size": 2097152, 00:04:37.006 "enable_recv_pipe": true, 00:04:37.006 "enable_quickack": false, 00:04:37.006 "enable_placement_id": 0, 00:04:37.006 "enable_zerocopy_send_server": true, 00:04:37.006 "enable_zerocopy_send_client": false, 00:04:37.006 "zerocopy_threshold": 0, 00:04:37.006 "tls_version": 0, 00:04:37.006 "enable_ktls": false 00:04:37.006 } 00:04:37.006 } 00:04:37.006 ] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "vmd", 00:04:37.006 "config": [] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "accel", 00:04:37.006 "config": [ 00:04:37.006 { 00:04:37.006 "method": "accel_set_options", 00:04:37.006 "params": { 00:04:37.006 "small_cache_size": 128, 00:04:37.006 "large_cache_size": 16, 00:04:37.006 "task_count": 2048, 00:04:37.006 "sequence_count": 2048, 00:04:37.006 "buf_count": 2048 00:04:37.006 } 00:04:37.006 } 00:04:37.006 ] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "bdev", 00:04:37.006 "config": [ 00:04:37.006 { 00:04:37.006 "method": "bdev_set_options", 00:04:37.006 "params": { 00:04:37.006 "bdev_io_pool_size": 65535, 00:04:37.006 "bdev_io_cache_size": 256, 00:04:37.006 "bdev_auto_examine": true, 00:04:37.006 "iobuf_small_cache_size": 128, 00:04:37.006 "iobuf_large_cache_size": 16 00:04:37.006 } 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "method": "bdev_raid_set_options", 00:04:37.006 "params": { 00:04:37.006 "process_window_size_kb": 1024, 00:04:37.006 "process_max_bandwidth_mb_sec": 0 00:04:37.006 } 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "method": "bdev_iscsi_set_options", 00:04:37.006 "params": { 00:04:37.006 "timeout_sec": 30 00:04:37.006 } 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "method": "bdev_nvme_set_options", 00:04:37.006 "params": { 00:04:37.006 "action_on_timeout": "none", 00:04:37.006 "timeout_us": 0, 00:04:37.006 "timeout_admin_us": 0, 00:04:37.006 "keep_alive_timeout_ms": 10000, 00:04:37.006 "arbitration_burst": 0, 00:04:37.006 "low_priority_weight": 0, 00:04:37.006 "medium_priority_weight": 0, 00:04:37.006 "high_priority_weight": 0, 00:04:37.006 "nvme_adminq_poll_period_us": 10000, 00:04:37.006 "nvme_ioq_poll_period_us": 0, 00:04:37.006 "io_queue_requests": 0, 00:04:37.006 "delay_cmd_submit": true, 00:04:37.006 "transport_retry_count": 4, 00:04:37.006 "bdev_retry_count": 3, 00:04:37.006 "transport_ack_timeout": 0, 00:04:37.006 "ctrlr_loss_timeout_sec": 0, 00:04:37.006 "reconnect_delay_sec": 0, 00:04:37.006 "fast_io_fail_timeout_sec": 0, 00:04:37.006 "disable_auto_failback": false, 00:04:37.006 "generate_uuids": false, 00:04:37.006 "transport_tos": 0, 00:04:37.006 "nvme_error_stat": false, 00:04:37.006 "rdma_srq_size": 0, 00:04:37.006 "io_path_stat": false, 
00:04:37.006 "allow_accel_sequence": false, 00:04:37.006 "rdma_max_cq_size": 0, 00:04:37.006 "rdma_cm_event_timeout_ms": 0, 00:04:37.006 "dhchap_digests": [ 00:04:37.006 "sha256", 00:04:37.006 "sha384", 00:04:37.006 "sha512" 00:04:37.006 ], 00:04:37.006 "dhchap_dhgroups": [ 00:04:37.006 "null", 00:04:37.006 "ffdhe2048", 00:04:37.006 "ffdhe3072", 00:04:37.006 "ffdhe4096", 00:04:37.006 "ffdhe6144", 00:04:37.006 "ffdhe8192" 00:04:37.006 ], 00:04:37.006 "rdma_umr_per_io": false 00:04:37.006 } 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "method": "bdev_nvme_set_hotplug", 00:04:37.006 "params": { 00:04:37.006 "period_us": 100000, 00:04:37.006 "enable": false 00:04:37.006 } 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "method": "bdev_wait_for_examine" 00:04:37.006 } 00:04:37.006 ] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "scsi", 00:04:37.006 "config": null 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "scheduler", 00:04:37.006 "config": [ 00:04:37.006 { 00:04:37.006 "method": "framework_set_scheduler", 00:04:37.006 "params": { 00:04:37.006 "name": "static" 00:04:37.006 } 00:04:37.006 } 00:04:37.006 ] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "vhost_scsi", 00:04:37.006 "config": [] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "vhost_blk", 00:04:37.006 "config": [] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "ublk", 00:04:37.006 "config": [] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "nbd", 00:04:37.006 "config": [] 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "subsystem": "nvmf", 00:04:37.006 "config": [ 00:04:37.006 { 00:04:37.006 "method": "nvmf_set_config", 00:04:37.006 "params": { 00:04:37.006 "discovery_filter": "match_any", 00:04:37.006 "admin_cmd_passthru": { 00:04:37.006 "identify_ctrlr": false 00:04:37.006 }, 00:04:37.006 "dhchap_digests": [ 00:04:37.006 "sha256", 00:04:37.006 "sha384", 00:04:37.006 "sha512" 00:04:37.006 ], 00:04:37.006 "dhchap_dhgroups": [ 00:04:37.006 "null", 00:04:37.006 "ffdhe2048", 00:04:37.006 "ffdhe3072", 00:04:37.006 "ffdhe4096", 00:04:37.006 "ffdhe6144", 00:04:37.006 "ffdhe8192" 00:04:37.006 ] 00:04:37.006 } 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "method": "nvmf_set_max_subsystems", 00:04:37.006 "params": { 00:04:37.006 "max_subsystems": 1024 00:04:37.006 } 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "method": "nvmf_set_crdt", 00:04:37.006 "params": { 00:04:37.006 "crdt1": 0, 00:04:37.006 "crdt2": 0, 00:04:37.006 "crdt3": 0 00:04:37.006 } 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "method": "nvmf_create_transport", 00:04:37.006 "params": { 00:04:37.007 "trtype": "TCP", 00:04:37.007 "max_queue_depth": 128, 00:04:37.007 "max_io_qpairs_per_ctrlr": 127, 00:04:37.007 "in_capsule_data_size": 4096, 00:04:37.007 "max_io_size": 131072, 00:04:37.007 "io_unit_size": 131072, 00:04:37.007 "max_aq_depth": 128, 00:04:37.007 "num_shared_buffers": 511, 00:04:37.007 "buf_cache_size": 4294967295, 00:04:37.007 "dif_insert_or_strip": false, 00:04:37.007 "zcopy": false, 00:04:37.007 "c2h_success": true, 00:04:37.007 "sock_priority": 0, 00:04:37.007 "abort_timeout_sec": 1, 00:04:37.007 "ack_timeout": 0, 00:04:37.007 "data_wr_pool_size": 0 00:04:37.007 } 00:04:37.007 } 00:04:37.007 ] 00:04:37.007 }, 00:04:37.007 { 00:04:37.007 "subsystem": "iscsi", 00:04:37.007 "config": [ 00:04:37.007 { 00:04:37.007 "method": "iscsi_set_options", 00:04:37.007 "params": { 00:04:37.007 "node_base": "iqn.2016-06.io.spdk", 00:04:37.007 "max_sessions": 128, 00:04:37.007 "max_connections_per_session": 2, 00:04:37.007 
"max_queue_depth": 64, 00:04:37.007 "default_time2wait": 2, 00:04:37.007 "default_time2retain": 20, 00:04:37.007 "first_burst_length": 8192, 00:04:37.007 "immediate_data": true, 00:04:37.007 "allow_duplicated_isid": false, 00:04:37.007 "error_recovery_level": 0, 00:04:37.007 "nop_timeout": 60, 00:04:37.007 "nop_in_interval": 30, 00:04:37.007 "disable_chap": false, 00:04:37.007 "require_chap": false, 00:04:37.007 "mutual_chap": false, 00:04:37.007 "chap_group": 0, 00:04:37.007 "max_large_datain_per_connection": 64, 00:04:37.007 "max_r2t_per_connection": 4, 00:04:37.007 "pdu_pool_size": 36864, 00:04:37.007 "immediate_data_pool_size": 16384, 00:04:37.007 "data_out_pool_size": 2048 00:04:37.007 } 00:04:37.007 } 00:04:37.007 ] 00:04:37.007 } 00:04:37.007 ] 00:04:37.007 } 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59253 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59253 ']' 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59253 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59253 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.007 killing process with pid 59253 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59253' 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59253 00:04:37.007 12:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59253 00:04:38.384 12:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59293 00:04:38.384 12:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:38.384 12:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.657 12:16:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59293 00:04:43.657 12:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59293 ']' 00:04:43.657 12:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59293 00:04:43.657 12:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:43.657 12:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.657 12:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59293 00:04:43.657 12:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.657 12:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.657 killing process with pid 59293 00:04:43.657 12:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59293' 00:04:43.657 12:16:50 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 59293 00:04:43.657 12:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59293 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:44.260 00:04:44.260 real 0m8.472s 00:04:44.260 user 0m8.034s 00:04:44.260 sys 0m0.652s 00:04:44.260 ************************************ 00:04:44.260 END TEST skip_rpc_with_json 00:04:44.260 ************************************ 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.260 12:16:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:44.260 12:16:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.260 12:16:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.260 12:16:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.260 ************************************ 00:04:44.260 START TEST skip_rpc_with_delay 00:04:44.260 ************************************ 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:44.260 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.522 [2024-12-16 12:16:51.379393] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
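Note: that completes skip_rpc_with_json. The test creates a TCP transport over RPC (nvmf_create_transport -t tcp), snapshots the live state with save_config into test/rpc/config.json, then replays it in a second target started with --json and --no-rpc-server; the grep for 'TCP Transport Init' in log.txt proves the transport came back purely from the saved JSON. A condensed sketch of that round trip, reusing the CONFIG_PATH and LOG_PATH values defined earlier in this log and assuming $SPDK_DIR is the checkout:

"$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t tcp
"$SPDK_DIR"/scripts/rpc.py save_config > "$SPDK_DIR"/test/rpc/config.json
# stop the first target, then replay the saved state with no RPC server at all
"$SPDK_DIR"/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json "$SPDK_DIR"/test/rpc/config.json &> "$SPDK_DIR"/test/rpc/log.txt &
sleep 5                                   # the harness sleeps rather than polling RPC here
grep -q 'TCP Transport Init' "$SPDK_DIR"/test/rpc/log.txt

The *ERROR* line just above is skip_rpc_with_delay's expected outcome: spdk_tgt refuses --wait-for-rpc when --no-rpc-server means no RPC server will ever start.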
00:04:44.522 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:44.522 ************************************ 00:04:44.522 END TEST skip_rpc_with_delay 00:04:44.522 ************************************ 00:04:44.522 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.522 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.522 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.522 00:04:44.522 real 0m0.123s 00:04:44.522 user 0m0.069s 00:04:44.522 sys 0m0.052s 00:04:44.522 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.522 12:16:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:44.522 12:16:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:44.522 12:16:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:44.522 12:16:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:44.522 12:16:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.522 12:16:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.522 12:16:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.522 ************************************ 00:04:44.522 START TEST exit_on_failed_rpc_init 00:04:44.522 ************************************ 00:04:44.522 12:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:44.522 12:16:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59410 00:04:44.522 12:16:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59410 00:04:44.522 12:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59410 ']' 00:04:44.522 12:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.522 12:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.522 12:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.522 12:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.522 12:16:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.522 12:16:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.522 [2024-12-16 12:16:51.557182] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
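Note: exit_on_failed_rpc_init, starting here, launches a first target on core mask 0x1 that owns the default RPC socket, then expects a second target (mask 0x2, same /var/tmp/spdk.sock) to fail with the "socket in use" error shown below and exit non-zero. The shape of that scenario, as a sketch assuming default socket paths and $SPDK_DIR:

"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &          # first target binds /var/tmp/spdk.sock
first=$!
# waitforlisten "$first", as in the log, then:
if "$SPDK_DIR"/build/bin/spdk_tgt -m 0x2; then   # same default socket: must fail
    echo 'FAIL: second target started despite the RPC socket being in use' >&2
    kill "$first"; exit 1
fi
kill "$first"; wait "$first"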
00:04:44.522 [2024-12-16 12:16:51.557299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59410 ] 00:04:44.781 [2024-12-16 12:16:51.713443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.781 [2024-12-16 12:16:51.789876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.346 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.346 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:45.346 12:16:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.346 12:16:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.346 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:45.347 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.347 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.347 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.347 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.347 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.347 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.347 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.347 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.347 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:45.347 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.604 [2024-12-16 12:16:52.464725] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:45.604 [2024-12-16 12:16:52.464840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59428 ] 00:04:45.604 [2024-12-16 12:16:52.625043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.863 [2024-12-16 12:16:52.717428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.863 [2024-12-16 12:16:52.717494] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:45.863 [2024-12-16 12:16:52.717507] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:45.863 [2024-12-16 12:16:52.717519] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59410 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59410 ']' 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59410 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59410 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.863 killing process with pid 59410 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59410' 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59410 00:04:45.863 12:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59410 00:04:47.240 00:04:47.240 real 0m2.608s 00:04:47.240 user 0m2.923s 00:04:47.240 sys 0m0.382s 00:04:47.240 12:16:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.240 ************************************ 00:04:47.240 END TEST exit_on_failed_rpc_init 00:04:47.240 ************************************ 00:04:47.240 12:16:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:47.240 12:16:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.240 00:04:47.240 real 0m17.782s 00:04:47.240 user 0m16.889s 00:04:47.240 sys 0m1.640s 00:04:47.240 12:16:54 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.240 ************************************ 00:04:47.240 END TEST skip_rpc 00:04:47.240 ************************************ 00:04:47.240 12:16:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.240 12:16:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:47.240 12:16:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.240 12:16:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.240 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:04:47.240 
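Note: with the skip_rpc suite done, the rpc_client and json_config suites follow, and each begins with the same lcov preamble already seen above, where scripts/common.sh compares the installed lcov version against 2 field by field to pick the option spelling used in LCOV_OPTS. A standalone equivalent of that gate, using sort -V rather than the harness's own cmp_versions loop (names here are illustrative):

ver_lt() {    # succeeds when $1 is strictly older than $2
    [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.0 lcov: keep the legacy --rc lcov_* flags'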
************************************ 00:04:47.240 START TEST rpc_client 00:04:47.240 ************************************ 00:04:47.240 12:16:54 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:47.240 * Looking for test storage... 00:04:47.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:47.240 12:16:54 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.240 12:16:54 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.240 12:16:54 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.240 12:16:54 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.240 12:16:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:47.240 12:16:54 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.240 12:16:54 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.240 --rc genhtml_branch_coverage=1 00:04:47.240 --rc genhtml_function_coverage=1 00:04:47.240 --rc genhtml_legend=1 00:04:47.240 --rc geninfo_all_blocks=1 00:04:47.240 --rc geninfo_unexecuted_blocks=1 00:04:47.240 00:04:47.240 ' 00:04:47.240 12:16:54 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.240 --rc genhtml_branch_coverage=1 00:04:47.240 --rc genhtml_function_coverage=1 00:04:47.240 --rc genhtml_legend=1 00:04:47.240 --rc geninfo_all_blocks=1 00:04:47.240 --rc geninfo_unexecuted_blocks=1 00:04:47.240 00:04:47.240 ' 00:04:47.240 12:16:54 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.240 --rc genhtml_branch_coverage=1 00:04:47.240 --rc genhtml_function_coverage=1 00:04:47.240 --rc genhtml_legend=1 00:04:47.240 --rc geninfo_all_blocks=1 00:04:47.240 --rc geninfo_unexecuted_blocks=1 00:04:47.240 00:04:47.240 ' 00:04:47.240 12:16:54 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.240 --rc genhtml_branch_coverage=1 00:04:47.240 --rc genhtml_function_coverage=1 00:04:47.240 --rc genhtml_legend=1 00:04:47.240 --rc geninfo_all_blocks=1 00:04:47.240 --rc geninfo_unexecuted_blocks=1 00:04:47.240 00:04:47.240 ' 00:04:47.240 12:16:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:47.500 OK 00:04:47.500 12:16:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:47.500 00:04:47.500 real 0m0.177s 00:04:47.500 user 0m0.104s 00:04:47.500 sys 0m0.078s 00:04:47.500 ************************************ 00:04:47.500 END TEST rpc_client 00:04:47.500 ************************************ 00:04:47.500 12:16:54 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.500 12:16:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:47.500 12:16:54 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:47.500 12:16:54 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.500 12:16:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.500 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:04:47.500 ************************************ 00:04:47.500 START TEST json_config 00:04:47.500 ************************************ 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.501 12:16:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.501 12:16:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.501 12:16:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.501 12:16:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.501 12:16:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.501 12:16:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.501 12:16:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.501 12:16:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.501 12:16:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.501 12:16:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.501 12:16:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.501 12:16:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:47.501 12:16:54 json_config -- scripts/common.sh@345 -- # : 1 00:04:47.501 12:16:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.501 12:16:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.501 12:16:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:47.501 12:16:54 json_config -- scripts/common.sh@353 -- # local d=1 00:04:47.501 12:16:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.501 12:16:54 json_config -- scripts/common.sh@355 -- # echo 1 00:04:47.501 12:16:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.501 12:16:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:47.501 12:16:54 json_config -- scripts/common.sh@353 -- # local d=2 00:04:47.501 12:16:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.501 12:16:54 json_config -- scripts/common.sh@355 -- # echo 2 00:04:47.501 12:16:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.501 12:16:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.501 12:16:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.501 12:16:54 json_config -- scripts/common.sh@368 -- # return 0 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.501 --rc genhtml_branch_coverage=1 00:04:47.501 --rc genhtml_function_coverage=1 00:04:47.501 --rc genhtml_legend=1 00:04:47.501 --rc geninfo_all_blocks=1 00:04:47.501 --rc geninfo_unexecuted_blocks=1 00:04:47.501 00:04:47.501 ' 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.501 --rc genhtml_branch_coverage=1 00:04:47.501 --rc genhtml_function_coverage=1 00:04:47.501 --rc genhtml_legend=1 00:04:47.501 --rc geninfo_all_blocks=1 00:04:47.501 --rc geninfo_unexecuted_blocks=1 00:04:47.501 00:04:47.501 ' 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.501 --rc genhtml_branch_coverage=1 00:04:47.501 --rc genhtml_function_coverage=1 00:04:47.501 --rc genhtml_legend=1 00:04:47.501 --rc geninfo_all_blocks=1 00:04:47.501 --rc geninfo_unexecuted_blocks=1 00:04:47.501 00:04:47.501 ' 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.501 --rc genhtml_branch_coverage=1 00:04:47.501 --rc genhtml_function_coverage=1 00:04:47.501 --rc genhtml_legend=1 00:04:47.501 --rc geninfo_all_blocks=1 00:04:47.501 --rc geninfo_unexecuted_blocks=1 00:04:47.501 00:04:47.501 ' 00:04:47.501 12:16:54 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.501 12:16:54 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63e07393-7b56-4cb3-b844-5ae779d86e1b 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=63e07393-7b56-4cb3-b844-5ae779d86e1b 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:47.501 12:16:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.501 12:16:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.501 12:16:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.501 12:16:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.501 12:16:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.501 12:16:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.501 12:16:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.501 12:16:54 json_config -- paths/export.sh@5 -- # export PATH 00:04:47.501 12:16:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@51 -- # : 0 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.501 12:16:54 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.501 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.501 12:16:54 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.501 12:16:54 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:47.501 12:16:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:47.501 12:16:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:47.501 12:16:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:47.501 12:16:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:47.501 WARNING: No tests are enabled so not running JSON configuration tests 00:04:47.501 12:16:54 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:47.501 12:16:54 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:47.501 00:04:47.501 real 0m0.140s 00:04:47.501 user 0m0.094s 00:04:47.501 sys 0m0.048s 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.501 ************************************ 00:04:47.501 END TEST json_config 00:04:47.501 ************************************ 00:04:47.501 12:16:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.501 12:16:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:47.501 12:16:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.501 12:16:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.501 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:04:47.762 ************************************ 00:04:47.762 START TEST json_config_extra_key 00:04:47.762 ************************************ 00:04:47.762 12:16:54 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:47.762 12:16:54 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.762 12:16:54 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.762 12:16:54 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.762 12:16:54 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.762 12:16:54 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:47.762 12:16:54 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.762 12:16:54 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.762 --rc genhtml_branch_coverage=1 00:04:47.762 --rc genhtml_function_coverage=1 00:04:47.762 --rc genhtml_legend=1 00:04:47.762 --rc geninfo_all_blocks=1 00:04:47.762 --rc geninfo_unexecuted_blocks=1 00:04:47.762 00:04:47.762 ' 00:04:47.762 12:16:54 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.762 --rc genhtml_branch_coverage=1 00:04:47.762 --rc genhtml_function_coverage=1 00:04:47.762 --rc genhtml_legend=1 00:04:47.762 --rc geninfo_all_blocks=1 00:04:47.762 --rc geninfo_unexecuted_blocks=1 00:04:47.762 00:04:47.762 ' 00:04:47.762 12:16:54 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.762 --rc genhtml_branch_coverage=1 00:04:47.762 --rc genhtml_function_coverage=1 00:04:47.762 --rc genhtml_legend=1 00:04:47.762 --rc geninfo_all_blocks=1 00:04:47.762 --rc geninfo_unexecuted_blocks=1 00:04:47.762 00:04:47.762 ' 00:04:47.762 12:16:54 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.762 --rc genhtml_branch_coverage=1 00:04:47.762 --rc 
genhtml_function_coverage=1 00:04:47.762 --rc genhtml_legend=1 00:04:47.762 --rc geninfo_all_blocks=1 00:04:47.762 --rc geninfo_unexecuted_blocks=1 00:04:47.762 00:04:47.762 ' 00:04:47.762 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63e07393-7b56-4cb3-b844-5ae779d86e1b 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=63e07393-7b56-4cb3-b844-5ae779d86e1b 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.762 12:16:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.762 12:16:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.762 12:16:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.762 12:16:54 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.762 12:16:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:47.762 12:16:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.762 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.762 12:16:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.762 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:47.762 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:47.762 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:47.762 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:47.762 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:47.762 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:47.762 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:47.762 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:47.762 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:47.763 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:47.763 INFO: launching applications... 00:04:47.763 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
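The "integer expression expected" message above deserves a note: it fires every time test/nvmf/common.sh is sourced (it shows up under both json_config and json_config_extra_key in this log), because line 33 evaluates a test of the form '[ "$VAR" -eq 1 ]' while the variable expands to empty, so '[' receives no integer to compare. A minimal sketch of the failure and the usual guard, using a hypothetical FLAG variable since the trace does not show the real variable name:

    unset FLAG
    [ "$FLAG" -eq 1 ]         # bash: [: : integer expression expected (exit status 2)
    [ "${FLAG:-0}" -eq 1 ]    # defaulting the expansion keeps the test well-formed

Because the failing test sits in a plain conditional, sourcing still succeeds and the suite keeps running; the message is noise rather than a test failure.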
00:04:47.763 12:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:47.763 12:16:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:47.763 12:16:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:47.763 12:16:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:47.763 12:16:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:47.763 12:16:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:47.763 12:16:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.763 12:16:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.763 12:16:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59616 00:04:47.763 12:16:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:47.763 Waiting for target to run... 00:04:47.763 12:16:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59616 /var/tmp/spdk_tgt.sock 00:04:47.763 12:16:54 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59616 ']' 00:04:47.763 12:16:54 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.763 12:16:54 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.763 12:16:54 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:47.763 12:16:54 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.763 12:16:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.763 12:16:54 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:47.763 [2024-12-16 12:16:54.818430] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:47.763 [2024-12-16 12:16:54.818550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59616 ] 00:04:48.022 [2024-12-16 12:16:55.118468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.278 [2024-12-16 12:16:55.188065] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.843 12:16:55 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.843 12:16:55 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:48.843 12:16:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:48.843 00:04:48.843 INFO: shutting down applications... 00:04:48.843 12:16:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
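The shutdown traced next is the standard SIGINT-then-poll pattern from test/json_config/common.sh: send SIGINT to the target, then probe the pid with kill -0 (signal 0 delivers nothing, it only tests that the process still exists) every 0.5 s for up to 30 attempts before declaring shutdown done. A condensed sketch of that loop, assuming the same 30 x 0.5 s budget seen in the trace (the real helper is keyed by app name rather than a raw pid):

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            # kill -0 sends no signal; nonzero status means the pid is gone
            kill -0 "$pid" 2>/dev/null || return 0
            sleep 0.5
        done
        return 1    # target outlived the ~15 s polling budget
    }

In the run below the target needs about 1.5 s (three sleep rounds) before kill -0 reports it gone and the loop breaks with "SPDK target shutdown done".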
00:04:48.843 12:16:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:48.843 12:16:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:48.843 12:16:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:48.844 12:16:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59616 ]] 00:04:48.844 12:16:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59616 00:04:48.844 12:16:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:48.844 12:16:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.844 12:16:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59616 00:04:48.844 12:16:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.102 12:16:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.102 12:16:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.102 12:16:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59616 00:04:49.102 12:16:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.667 12:16:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.667 12:16:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.667 12:16:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59616 00:04:49.667 12:16:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.238 12:16:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.238 12:16:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.238 12:16:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59616 00:04:50.238 12:16:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:50.238 12:16:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:50.238 12:16:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:50.238 SPDK target shutdown done 00:04:50.238 Success 00:04:50.238 12:16:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:50.238 12:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:50.238 00:04:50.238 real 0m2.562s 00:04:50.238 user 0m2.318s 00:04:50.238 sys 0m0.374s 00:04:50.238 ************************************ 00:04:50.238 END TEST json_config_extra_key 00:04:50.238 12:16:57 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.238 12:16:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.238 ************************************ 00:04:50.238 12:16:57 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.238 12:16:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.238 12:16:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.238 12:16:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.238 ************************************ 00:04:50.238 START TEST alias_rpc 00:04:50.238 ************************************ 00:04:50.238 12:16:57 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.238 * Looking for test storage... 
00:04:50.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:50.238 12:16:57 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.238 12:16:57 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.238 12:16:57 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:50.496 12:16:57 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.496 12:16:57 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:50.496 12:16:57 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.496 12:16:57 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:50.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.496 --rc genhtml_branch_coverage=1 00:04:50.496 --rc genhtml_function_coverage=1 00:04:50.496 --rc genhtml_legend=1 00:04:50.496 --rc geninfo_all_blocks=1 00:04:50.496 --rc geninfo_unexecuted_blocks=1 00:04:50.496 00:04:50.496 ' 00:04:50.496 12:16:57 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:50.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.496 --rc genhtml_branch_coverage=1 00:04:50.496 --rc genhtml_function_coverage=1 00:04:50.496 --rc genhtml_legend=1 00:04:50.496 --rc geninfo_all_blocks=1 00:04:50.496 --rc geninfo_unexecuted_blocks=1 00:04:50.496 00:04:50.496 ' 00:04:50.496 12:16:57 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:50.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.496 --rc genhtml_branch_coverage=1 00:04:50.496 --rc genhtml_function_coverage=1 00:04:50.496 --rc genhtml_legend=1 00:04:50.496 --rc geninfo_all_blocks=1 00:04:50.496 --rc geninfo_unexecuted_blocks=1 00:04:50.496 00:04:50.496 ' 00:04:50.496 12:16:57 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:50.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.496 --rc genhtml_branch_coverage=1 00:04:50.496 --rc genhtml_function_coverage=1 00:04:50.496 --rc genhtml_legend=1 00:04:50.496 --rc geninfo_all_blocks=1 00:04:50.496 --rc geninfo_unexecuted_blocks=1 00:04:50.496 00:04:50.496 ' 00:04:50.496 12:16:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:50.496 12:16:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59708 00:04:50.496 12:16:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59708 00:04:50.496 12:16:57 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59708 ']' 00:04:50.496 12:16:57 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.496 12:16:57 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.496 12:16:57 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.496 12:16:57 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.496 12:16:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.496 12:16:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.496 [2024-12-16 12:16:57.437547] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:50.496 [2024-12-16 12:16:57.437664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59708 ] 00:04:50.496 [2024-12-16 12:16:57.590303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.754 [2024-12-16 12:16:57.666363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.321 12:16:58 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.321 12:16:58 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:51.321 12:16:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:51.607 12:16:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59708 00:04:51.607 12:16:58 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59708 ']' 00:04:51.607 12:16:58 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59708 00:04:51.607 12:16:58 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:51.607 12:16:58 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.607 12:16:58 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59708 00:04:51.607 12:16:58 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.607 killing process with pid 59708 00:04:51.607 12:16:58 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.607 12:16:58 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59708' 00:04:51.607 12:16:58 alias_rpc -- common/autotest_common.sh@973 -- # kill 59708 00:04:51.607 12:16:58 alias_rpc -- common/autotest_common.sh@978 -- # wait 59708 00:04:52.984 00:04:52.984 real 0m2.431s 00:04:52.984 user 0m2.522s 00:04:52.984 sys 0m0.387s 00:04:52.984 12:16:59 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.984 ************************************ 00:04:52.984 END TEST alias_rpc 00:04:52.984 ************************************ 00:04:52.985 12:16:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.985 12:16:59 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:52.985 12:16:59 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:52.985 12:16:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.985 12:16:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.985 12:16:59 -- common/autotest_common.sh@10 -- # set +x 00:04:52.985 ************************************ 00:04:52.985 START TEST spdkcli_tcp 00:04:52.985 ************************************ 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:52.985 * Looking for test storage... 
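One detail in the alias_rpc teardown just traced: killprocess does not signal blindly. It first confirms the pid is non-empty and still alive (kill -0), then on Linux resolves the process name with ps so it can special-case wrappers (the comparison against "sudo" above) before the plain kill and wait. A rough sketch of the path exercised here, with the sudo branch elided since this run never takes it:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return                      # nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            pname=$(ps --no-headers -o comm= "$pid")  # reactor_0 for spdk_tgt
        fi
        # the real helper handles pname = sudo separately so the child,
        # not the sudo wrapper, receives the signal
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

The trailing wait reaps the child so the next test starts from a clean slate.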
00:04:52.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.985 12:16:59 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:52.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.985 --rc genhtml_branch_coverage=1 00:04:52.985 --rc genhtml_function_coverage=1 00:04:52.985 --rc genhtml_legend=1 00:04:52.985 --rc geninfo_all_blocks=1 00:04:52.985 --rc geninfo_unexecuted_blocks=1 00:04:52.985 00:04:52.985 ' 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:52.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.985 --rc genhtml_branch_coverage=1 00:04:52.985 --rc genhtml_function_coverage=1 00:04:52.985 --rc genhtml_legend=1 00:04:52.985 --rc geninfo_all_blocks=1 00:04:52.985 --rc geninfo_unexecuted_blocks=1 00:04:52.985 
00:04:52.985 ' 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:52.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.985 --rc genhtml_branch_coverage=1 00:04:52.985 --rc genhtml_function_coverage=1 00:04:52.985 --rc genhtml_legend=1 00:04:52.985 --rc geninfo_all_blocks=1 00:04:52.985 --rc geninfo_unexecuted_blocks=1 00:04:52.985 00:04:52.985 ' 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:52.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.985 --rc genhtml_branch_coverage=1 00:04:52.985 --rc genhtml_function_coverage=1 00:04:52.985 --rc genhtml_legend=1 00:04:52.985 --rc geninfo_all_blocks=1 00:04:52.985 --rc geninfo_unexecuted_blocks=1 00:04:52.985 00:04:52.985 ' 00:04:52.985 12:16:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:52.985 12:16:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:52.985 12:16:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:52.985 12:16:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:52.985 12:16:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:52.985 12:16:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:52.985 12:16:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.985 12:16:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59793 00:04:52.985 12:16:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59793 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59793 ']' 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.985 12:16:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:52.985 12:16:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.985 [2024-12-16 12:16:59.939221] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:52.985 [2024-12-16 12:16:59.939328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59793 ] 00:04:53.246 [2024-12-16 12:17:00.100866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.246 [2024-12-16 12:17:00.200626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.246 [2024-12-16 12:17:00.200702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.817 12:17:00 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.817 12:17:00 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:53.817 12:17:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59810 00:04:53.817 12:17:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:53.817 12:17:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:54.077 [ 00:04:54.077 "bdev_malloc_delete", 00:04:54.077 "bdev_malloc_create", 00:04:54.077 "bdev_null_resize", 00:04:54.077 "bdev_null_delete", 00:04:54.077 "bdev_null_create", 00:04:54.077 "bdev_nvme_cuse_unregister", 00:04:54.077 "bdev_nvme_cuse_register", 00:04:54.077 "bdev_opal_new_user", 00:04:54.077 "bdev_opal_set_lock_state", 00:04:54.077 "bdev_opal_delete", 00:04:54.077 "bdev_opal_get_info", 00:04:54.077 "bdev_opal_create", 00:04:54.077 "bdev_nvme_opal_revert", 00:04:54.077 "bdev_nvme_opal_init", 00:04:54.077 "bdev_nvme_send_cmd", 00:04:54.077 "bdev_nvme_set_keys", 00:04:54.077 "bdev_nvme_get_path_iostat", 00:04:54.077 "bdev_nvme_get_mdns_discovery_info", 00:04:54.077 "bdev_nvme_stop_mdns_discovery", 00:04:54.077 "bdev_nvme_start_mdns_discovery", 00:04:54.077 "bdev_nvme_set_multipath_policy", 00:04:54.077 "bdev_nvme_set_preferred_path", 00:04:54.077 "bdev_nvme_get_io_paths", 00:04:54.077 "bdev_nvme_remove_error_injection", 00:04:54.077 "bdev_nvme_add_error_injection", 00:04:54.077 "bdev_nvme_get_discovery_info", 00:04:54.077 "bdev_nvme_stop_discovery", 00:04:54.077 "bdev_nvme_start_discovery", 00:04:54.077 "bdev_nvme_get_controller_health_info", 00:04:54.077 "bdev_nvme_disable_controller", 00:04:54.077 "bdev_nvme_enable_controller", 00:04:54.077 "bdev_nvme_reset_controller", 00:04:54.077 "bdev_nvme_get_transport_statistics", 00:04:54.077 "bdev_nvme_apply_firmware", 00:04:54.077 "bdev_nvme_detach_controller", 00:04:54.077 "bdev_nvme_get_controllers", 00:04:54.077 "bdev_nvme_attach_controller", 00:04:54.077 "bdev_nvme_set_hotplug", 00:04:54.077 "bdev_nvme_set_options", 00:04:54.077 "bdev_passthru_delete", 00:04:54.077 "bdev_passthru_create", 00:04:54.077 "bdev_lvol_set_parent_bdev", 00:04:54.077 "bdev_lvol_set_parent", 00:04:54.077 "bdev_lvol_check_shallow_copy", 00:04:54.077 "bdev_lvol_start_shallow_copy", 00:04:54.077 "bdev_lvol_grow_lvstore", 00:04:54.077 "bdev_lvol_get_lvols", 00:04:54.077 "bdev_lvol_get_lvstores", 00:04:54.077 "bdev_lvol_delete", 00:04:54.077 "bdev_lvol_set_read_only", 00:04:54.077 "bdev_lvol_resize", 00:04:54.077 "bdev_lvol_decouple_parent", 00:04:54.077 "bdev_lvol_inflate", 00:04:54.077 "bdev_lvol_rename", 00:04:54.077 "bdev_lvol_clone_bdev", 00:04:54.077 "bdev_lvol_clone", 00:04:54.077 "bdev_lvol_snapshot", 00:04:54.077 "bdev_lvol_create", 00:04:54.077 "bdev_lvol_delete_lvstore", 00:04:54.077 "bdev_lvol_rename_lvstore", 00:04:54.077 
"bdev_lvol_create_lvstore", 00:04:54.077 "bdev_raid_set_options", 00:04:54.077 "bdev_raid_remove_base_bdev", 00:04:54.077 "bdev_raid_add_base_bdev", 00:04:54.077 "bdev_raid_delete", 00:04:54.077 "bdev_raid_create", 00:04:54.077 "bdev_raid_get_bdevs", 00:04:54.077 "bdev_error_inject_error", 00:04:54.077 "bdev_error_delete", 00:04:54.077 "bdev_error_create", 00:04:54.077 "bdev_split_delete", 00:04:54.077 "bdev_split_create", 00:04:54.077 "bdev_delay_delete", 00:04:54.077 "bdev_delay_create", 00:04:54.077 "bdev_delay_update_latency", 00:04:54.077 "bdev_zone_block_delete", 00:04:54.077 "bdev_zone_block_create", 00:04:54.077 "blobfs_create", 00:04:54.077 "blobfs_detect", 00:04:54.078 "blobfs_set_cache_size", 00:04:54.078 "bdev_xnvme_delete", 00:04:54.078 "bdev_xnvme_create", 00:04:54.078 "bdev_aio_delete", 00:04:54.078 "bdev_aio_rescan", 00:04:54.078 "bdev_aio_create", 00:04:54.078 "bdev_ftl_set_property", 00:04:54.078 "bdev_ftl_get_properties", 00:04:54.078 "bdev_ftl_get_stats", 00:04:54.078 "bdev_ftl_unmap", 00:04:54.078 "bdev_ftl_unload", 00:04:54.078 "bdev_ftl_delete", 00:04:54.078 "bdev_ftl_load", 00:04:54.078 "bdev_ftl_create", 00:04:54.078 "bdev_virtio_attach_controller", 00:04:54.078 "bdev_virtio_scsi_get_devices", 00:04:54.078 "bdev_virtio_detach_controller", 00:04:54.078 "bdev_virtio_blk_set_hotplug", 00:04:54.078 "bdev_iscsi_delete", 00:04:54.078 "bdev_iscsi_create", 00:04:54.078 "bdev_iscsi_set_options", 00:04:54.078 "accel_error_inject_error", 00:04:54.078 "ioat_scan_accel_module", 00:04:54.078 "dsa_scan_accel_module", 00:04:54.078 "iaa_scan_accel_module", 00:04:54.078 "keyring_file_remove_key", 00:04:54.078 "keyring_file_add_key", 00:04:54.078 "keyring_linux_set_options", 00:04:54.078 "fsdev_aio_delete", 00:04:54.078 "fsdev_aio_create", 00:04:54.078 "iscsi_get_histogram", 00:04:54.078 "iscsi_enable_histogram", 00:04:54.078 "iscsi_set_options", 00:04:54.078 "iscsi_get_auth_groups", 00:04:54.078 "iscsi_auth_group_remove_secret", 00:04:54.078 "iscsi_auth_group_add_secret", 00:04:54.078 "iscsi_delete_auth_group", 00:04:54.078 "iscsi_create_auth_group", 00:04:54.078 "iscsi_set_discovery_auth", 00:04:54.078 "iscsi_get_options", 00:04:54.078 "iscsi_target_node_request_logout", 00:04:54.078 "iscsi_target_node_set_redirect", 00:04:54.078 "iscsi_target_node_set_auth", 00:04:54.078 "iscsi_target_node_add_lun", 00:04:54.078 "iscsi_get_stats", 00:04:54.078 "iscsi_get_connections", 00:04:54.078 "iscsi_portal_group_set_auth", 00:04:54.078 "iscsi_start_portal_group", 00:04:54.078 "iscsi_delete_portal_group", 00:04:54.078 "iscsi_create_portal_group", 00:04:54.078 "iscsi_get_portal_groups", 00:04:54.078 "iscsi_delete_target_node", 00:04:54.078 "iscsi_target_node_remove_pg_ig_maps", 00:04:54.078 "iscsi_target_node_add_pg_ig_maps", 00:04:54.078 "iscsi_create_target_node", 00:04:54.078 "iscsi_get_target_nodes", 00:04:54.078 "iscsi_delete_initiator_group", 00:04:54.078 "iscsi_initiator_group_remove_initiators", 00:04:54.078 "iscsi_initiator_group_add_initiators", 00:04:54.078 "iscsi_create_initiator_group", 00:04:54.078 "iscsi_get_initiator_groups", 00:04:54.078 "nvmf_set_crdt", 00:04:54.078 "nvmf_set_config", 00:04:54.078 "nvmf_set_max_subsystems", 00:04:54.078 "nvmf_stop_mdns_prr", 00:04:54.078 "nvmf_publish_mdns_prr", 00:04:54.078 "nvmf_subsystem_get_listeners", 00:04:54.078 "nvmf_subsystem_get_qpairs", 00:04:54.078 "nvmf_subsystem_get_controllers", 00:04:54.078 "nvmf_get_stats", 00:04:54.078 "nvmf_get_transports", 00:04:54.078 "nvmf_create_transport", 00:04:54.078 "nvmf_get_targets", 00:04:54.078 
"nvmf_delete_target", 00:04:54.078 "nvmf_create_target", 00:04:54.078 "nvmf_subsystem_allow_any_host", 00:04:54.078 "nvmf_subsystem_set_keys", 00:04:54.078 "nvmf_subsystem_remove_host", 00:04:54.078 "nvmf_subsystem_add_host", 00:04:54.078 "nvmf_ns_remove_host", 00:04:54.078 "nvmf_ns_add_host", 00:04:54.078 "nvmf_subsystem_remove_ns", 00:04:54.078 "nvmf_subsystem_set_ns_ana_group", 00:04:54.078 "nvmf_subsystem_add_ns", 00:04:54.078 "nvmf_subsystem_listener_set_ana_state", 00:04:54.078 "nvmf_discovery_get_referrals", 00:04:54.078 "nvmf_discovery_remove_referral", 00:04:54.078 "nvmf_discovery_add_referral", 00:04:54.078 "nvmf_subsystem_remove_listener", 00:04:54.078 "nvmf_subsystem_add_listener", 00:04:54.078 "nvmf_delete_subsystem", 00:04:54.078 "nvmf_create_subsystem", 00:04:54.078 "nvmf_get_subsystems", 00:04:54.078 "env_dpdk_get_mem_stats", 00:04:54.078 "nbd_get_disks", 00:04:54.078 "nbd_stop_disk", 00:04:54.078 "nbd_start_disk", 00:04:54.078 "ublk_recover_disk", 00:04:54.078 "ublk_get_disks", 00:04:54.078 "ublk_stop_disk", 00:04:54.078 "ublk_start_disk", 00:04:54.078 "ublk_destroy_target", 00:04:54.078 "ublk_create_target", 00:04:54.078 "virtio_blk_create_transport", 00:04:54.078 "virtio_blk_get_transports", 00:04:54.078 "vhost_controller_set_coalescing", 00:04:54.078 "vhost_get_controllers", 00:04:54.078 "vhost_delete_controller", 00:04:54.078 "vhost_create_blk_controller", 00:04:54.078 "vhost_scsi_controller_remove_target", 00:04:54.078 "vhost_scsi_controller_add_target", 00:04:54.078 "vhost_start_scsi_controller", 00:04:54.078 "vhost_create_scsi_controller", 00:04:54.078 "thread_set_cpumask", 00:04:54.078 "scheduler_set_options", 00:04:54.078 "framework_get_governor", 00:04:54.078 "framework_get_scheduler", 00:04:54.078 "framework_set_scheduler", 00:04:54.078 "framework_get_reactors", 00:04:54.078 "thread_get_io_channels", 00:04:54.078 "thread_get_pollers", 00:04:54.078 "thread_get_stats", 00:04:54.078 "framework_monitor_context_switch", 00:04:54.078 "spdk_kill_instance", 00:04:54.078 "log_enable_timestamps", 00:04:54.078 "log_get_flags", 00:04:54.078 "log_clear_flag", 00:04:54.078 "log_set_flag", 00:04:54.078 "log_get_level", 00:04:54.078 "log_set_level", 00:04:54.078 "log_get_print_level", 00:04:54.078 "log_set_print_level", 00:04:54.078 "framework_enable_cpumask_locks", 00:04:54.078 "framework_disable_cpumask_locks", 00:04:54.078 "framework_wait_init", 00:04:54.078 "framework_start_init", 00:04:54.078 "scsi_get_devices", 00:04:54.078 "bdev_get_histogram", 00:04:54.078 "bdev_enable_histogram", 00:04:54.078 "bdev_set_qos_limit", 00:04:54.078 "bdev_set_qd_sampling_period", 00:04:54.078 "bdev_get_bdevs", 00:04:54.078 "bdev_reset_iostat", 00:04:54.078 "bdev_get_iostat", 00:04:54.078 "bdev_examine", 00:04:54.078 "bdev_wait_for_examine", 00:04:54.078 "bdev_set_options", 00:04:54.078 "accel_get_stats", 00:04:54.078 "accel_set_options", 00:04:54.078 "accel_set_driver", 00:04:54.078 "accel_crypto_key_destroy", 00:04:54.078 "accel_crypto_keys_get", 00:04:54.078 "accel_crypto_key_create", 00:04:54.078 "accel_assign_opc", 00:04:54.078 "accel_get_module_info", 00:04:54.078 "accel_get_opc_assignments", 00:04:54.078 "vmd_rescan", 00:04:54.078 "vmd_remove_device", 00:04:54.078 "vmd_enable", 00:04:54.078 "sock_get_default_impl", 00:04:54.078 "sock_set_default_impl", 00:04:54.078 "sock_impl_set_options", 00:04:54.078 "sock_impl_get_options", 00:04:54.078 "iobuf_get_stats", 00:04:54.078 "iobuf_set_options", 00:04:54.078 "keyring_get_keys", 00:04:54.078 "framework_get_pci_devices", 00:04:54.078 
"framework_get_config", 00:04:54.078 "framework_get_subsystems", 00:04:54.078 "fsdev_set_opts", 00:04:54.078 "fsdev_get_opts", 00:04:54.078 "trace_get_info", 00:04:54.078 "trace_get_tpoint_group_mask", 00:04:54.078 "trace_disable_tpoint_group", 00:04:54.078 "trace_enable_tpoint_group", 00:04:54.078 "trace_clear_tpoint_mask", 00:04:54.078 "trace_set_tpoint_mask", 00:04:54.078 "notify_get_notifications", 00:04:54.078 "notify_get_types", 00:04:54.078 "spdk_get_version", 00:04:54.078 "rpc_get_methods" 00:04:54.078 ] 00:04:54.078 12:17:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.078 12:17:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:54.078 12:17:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59793 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59793 ']' 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59793 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59793 00:04:54.078 killing process with pid 59793 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59793' 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59793 00:04:54.078 12:17:01 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59793 00:04:55.453 00:04:55.453 real 0m2.721s 00:04:55.453 user 0m4.856s 00:04:55.453 sys 0m0.450s 00:04:55.453 12:17:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.453 ************************************ 00:04:55.453 12:17:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.453 END TEST spdkcli_tcp 00:04:55.453 ************************************ 00:04:55.453 12:17:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.453 12:17:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.453 12:17:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.453 12:17:02 -- common/autotest_common.sh@10 -- # set +x 00:04:55.453 ************************************ 00:04:55.453 START TEST dpdk_mem_utility 00:04:55.453 ************************************ 00:04:55.453 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.453 * Looking for test storage... 
00:04:55.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:55.453 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.453 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.453 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.712 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.713 12:17:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:55.713 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.713 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.713 --rc genhtml_branch_coverage=1 00:04:55.713 --rc genhtml_function_coverage=1 00:04:55.713 --rc genhtml_legend=1 00:04:55.713 --rc geninfo_all_blocks=1 00:04:55.713 --rc geninfo_unexecuted_blocks=1 00:04:55.713 00:04:55.713 ' 00:04:55.713 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.713 --rc 
genhtml_branch_coverage=1 00:04:55.713 --rc genhtml_function_coverage=1 00:04:55.713 --rc genhtml_legend=1 00:04:55.713 --rc geninfo_all_blocks=1 00:04:55.713 --rc geninfo_unexecuted_blocks=1 00:04:55.713 00:04:55.713 ' 00:04:55.713 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.713 --rc genhtml_branch_coverage=1 00:04:55.713 --rc genhtml_function_coverage=1 00:04:55.713 --rc genhtml_legend=1 00:04:55.713 --rc geninfo_all_blocks=1 00:04:55.713 --rc geninfo_unexecuted_blocks=1 00:04:55.713 00:04:55.713 ' 00:04:55.713 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.713 --rc genhtml_branch_coverage=1 00:04:55.713 --rc genhtml_function_coverage=1 00:04:55.713 --rc genhtml_legend=1 00:04:55.713 --rc geninfo_all_blocks=1 00:04:55.713 --rc geninfo_unexecuted_blocks=1 00:04:55.713 00:04:55.713 ' 00:04:55.713 12:17:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:55.713 12:17:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59904 00:04:55.713 12:17:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59904 00:04:55.713 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59904 ']' 00:04:55.713 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.713 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.713 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.713 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.713 12:17:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.713 12:17:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.713 [2024-12-16 12:17:02.698520] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:55.713 [2024-12-16 12:17:02.698635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59904 ] 00:04:55.972 [2024-12-16 12:17:02.853870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.972 [2024-12-16 12:17:02.932885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.539 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.539 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:56.539 12:17:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:56.539 12:17:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:56.539 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.539 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.539 { 00:04:56.539 "filename": "/tmp/spdk_mem_dump.txt" 00:04:56.539 } 00:04:56.539 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.539 12:17:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.539 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:56.539 1 heaps totaling size 824.000000 MiB 00:04:56.539 size: 824.000000 MiB heap id: 0 00:04:56.539 end heaps---------- 00:04:56.539 9 mempools totaling size 603.782043 MiB 00:04:56.539 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:56.539 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:56.539 size: 100.555481 MiB name: bdev_io_59904 00:04:56.539 size: 50.003479 MiB name: msgpool_59904 00:04:56.539 size: 36.509338 MiB name: fsdev_io_59904 00:04:56.539 size: 21.763794 MiB name: PDU_Pool 00:04:56.539 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:56.539 size: 4.133484 MiB name: evtpool_59904 00:04:56.539 size: 0.026123 MiB name: Session_Pool 00:04:56.539 end mempools------- 00:04:56.539 6 memzones totaling size 4.142822 MiB 00:04:56.539 size: 1.000366 MiB name: RG_ring_0_59904 00:04:56.539 size: 1.000366 MiB name: RG_ring_1_59904 00:04:56.539 size: 1.000366 MiB name: RG_ring_4_59904 00:04:56.540 size: 1.000366 MiB name: RG_ring_5_59904 00:04:56.540 size: 0.125366 MiB name: RG_ring_2_59904 00:04:56.540 size: 0.015991 MiB name: RG_ring_3_59904 00:04:56.540 end memzones------- 00:04:56.540 12:17:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:56.540 heap id: 0 total size: 824.000000 MiB number of busy elements: 322 number of free elements: 18 00:04:56.540 list of free elements. 
size: 16.779663 MiB 00:04:56.540 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:56.540 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:56.540 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:56.540 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:56.540 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:56.540 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:56.540 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:56.540 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:56.540 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:56.540 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:56.540 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:56.540 element at address: 0x20001b400000 with size: 0.559753 MiB 00:04:56.540 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:56.540 element at address: 0x200019600000 with size: 0.488464 MiB 00:04:56.540 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:56.540 element at address: 0x200012c00000 with size: 0.433228 MiB 00:04:56.540 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:56.540 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:56.540 list of standard malloc elements. size: 199.289429 MiB 00:04:56.540 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:56.540 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:56.540 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:56.540 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:56.540 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:56.540 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:56.540 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:56.540 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:56.540 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:56.540 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:56.540 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:56.540 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:56.540 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:56.540 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:56.540 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:56.540 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:56.540 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001967d0c0 
with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:56.541 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:56.541 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4916c0 with size: 0.000244 MiB 
00:04:56.541 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:56.541 element at 
address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:56.541 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:56.541 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886d180 
with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:56.541 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:56.542 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:56.542 list of memzone associated elements. 
size: 607.930908 MiB 00:04:56.542 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:56.542 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:56.542 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:56.542 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:56.542 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:56.542 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59904_0 00:04:56.542 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:56.542 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59904_0 00:04:56.542 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:56.542 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59904_0 00:04:56.542 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:56.542 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:56.542 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:56.542 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:56.542 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:56.542 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59904_0 00:04:56.542 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:56.542 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59904 00:04:56.542 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:56.542 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59904 00:04:56.542 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:56.542 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:56.542 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:56.542 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:56.542 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:56.542 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:56.542 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:56.542 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:56.542 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:56.542 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59904 00:04:56.542 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:56.542 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59904 00:04:56.542 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:56.542 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59904 00:04:56.542 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:56.542 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59904 00:04:56.542 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:56.542 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59904 00:04:56.542 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:56.542 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59904 00:04:56.542 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:56.542 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:56.542 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:56.542 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:56.542 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:56.542 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:56.542 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:56.542 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59904 00:04:56.542 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:56.542 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59904 00:04:56.542 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:56.542 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:56.542 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:56.542 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:56.542 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:56.542 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59904 00:04:56.542 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:56.542 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:56.542 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:56.542 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59904 00:04:56.542 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:56.542 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59904 00:04:56.542 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:56.542 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59904 00:04:56.542 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:56.542 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:56.542 12:17:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:56.542 12:17:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59904 00:04:56.542 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59904 ']' 00:04:56.542 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59904 00:04:56.542 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:56.542 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.542 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59904 00:04:56.542 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.542 killing process with pid 59904 00:04:56.542 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.542 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59904' 00:04:56.542 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59904 00:04:56.542 12:17:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59904 00:04:57.917 00:04:57.917 real 0m2.287s 00:04:57.917 user 0m2.273s 00:04:57.917 sys 0m0.379s 00:04:57.917 12:17:04 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.917 ************************************ 00:04:57.917 END TEST dpdk_mem_utility 00:04:57.917 ************************************ 00:04:57.917 12:17:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.917 12:17:04 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.917 12:17:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.917 12:17:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.917 12:17:04 -- common/autotest_common.sh@10 -- # set +x 
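The dpdk_mem_utility pass above reduces to three calls against the running spdk_tgt: one RPC that writes the dump file and two runs of the report script. A minimal manual reproduction, assuming an SPDK checkout as the working directory and a target already listening on the default RPC socket (both assumptions, not shown in the trace):

  # Ask the running target to dump its DPDK memory state; the RPC replies with
  # the output path, /tmp/spdk_mem_dump.txt, as seen in the log above
  ./scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize heaps, mempools and memzones from that dump
  ./scripts/dpdk_mem_info.py

  # Per-heap breakdown of free/busy elements for heap 0 (the -m 0 run in the trace)
  ./scripts/dpdk_mem_info.py -m 0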
00:04:57.917 ************************************ 00:04:57.917 START TEST event 00:04:57.917 ************************************ 00:04:57.917 12:17:04 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.917 * Looking for test storage... 00:04:57.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:57.917 12:17:04 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.917 12:17:04 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.917 12:17:04 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:57.917 12:17:04 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:57.917 12:17:04 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.917 12:17:04 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.917 12:17:04 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.917 12:17:04 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.917 12:17:04 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.917 12:17:04 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.917 12:17:04 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.917 12:17:04 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.917 12:17:04 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.917 12:17:04 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.917 12:17:04 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.917 12:17:04 event -- scripts/common.sh@344 -- # case "$op" in 00:04:57.917 12:17:04 event -- scripts/common.sh@345 -- # : 1 00:04:57.917 12:17:04 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.917 12:17:04 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.917 12:17:04 event -- scripts/common.sh@365 -- # decimal 1 00:04:57.917 12:17:04 event -- scripts/common.sh@353 -- # local d=1 00:04:57.917 12:17:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.917 12:17:04 event -- scripts/common.sh@355 -- # echo 1 00:04:57.917 12:17:04 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.917 12:17:04 event -- scripts/common.sh@366 -- # decimal 2 00:04:57.917 12:17:04 event -- scripts/common.sh@353 -- # local d=2 00:04:57.917 12:17:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.917 12:17:04 event -- scripts/common.sh@355 -- # echo 2 00:04:57.917 12:17:04 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.917 12:17:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.917 12:17:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.917 12:17:04 event -- scripts/common.sh@368 -- # return 0 00:04:57.917 12:17:04 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.917 12:17:04 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:57.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.917 --rc genhtml_branch_coverage=1 00:04:57.917 --rc genhtml_function_coverage=1 00:04:57.917 --rc genhtml_legend=1 00:04:57.917 --rc geninfo_all_blocks=1 00:04:57.917 --rc geninfo_unexecuted_blocks=1 00:04:57.917 00:04:57.917 ' 00:04:57.917 12:17:04 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:57.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.917 --rc genhtml_branch_coverage=1 00:04:57.917 --rc genhtml_function_coverage=1 00:04:57.917 --rc genhtml_legend=1 00:04:57.917 --rc 
geninfo_all_blocks=1 00:04:57.917 --rc geninfo_unexecuted_blocks=1 00:04:57.917 00:04:57.917 ' 00:04:57.917 12:17:04 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:57.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.917 --rc genhtml_branch_coverage=1 00:04:57.917 --rc genhtml_function_coverage=1 00:04:57.917 --rc genhtml_legend=1 00:04:57.917 --rc geninfo_all_blocks=1 00:04:57.917 --rc geninfo_unexecuted_blocks=1 00:04:57.917 00:04:57.917 ' 00:04:57.917 12:17:04 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:57.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.917 --rc genhtml_branch_coverage=1 00:04:57.917 --rc genhtml_function_coverage=1 00:04:57.917 --rc genhtml_legend=1 00:04:57.917 --rc geninfo_all_blocks=1 00:04:57.917 --rc geninfo_unexecuted_blocks=1 00:04:57.917 00:04:57.917 ' 00:04:57.917 12:17:04 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:57.918 12:17:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:57.918 12:17:04 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.918 12:17:04 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:57.918 12:17:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.918 12:17:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.918 ************************************ 00:04:57.918 START TEST event_perf 00:04:57.918 ************************************ 00:04:57.918 12:17:04 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.918 Running I/O for 1 seconds...[2024-12-16 12:17:04.995066] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:57.918 [2024-12-16 12:17:04.995181] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59990 ] 00:04:58.175 [2024-12-16 12:17:05.145389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.175 [2024-12-16 12:17:05.224511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.175 [2024-12-16 12:17:05.224800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.175 [2024-12-16 12:17:05.224977] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.175 Running I/O for 1 seconds...[2024-12-16 12:17:05.225001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.548 00:04:59.548 lcore 0: 198787 00:04:59.548 lcore 1: 198790 00:04:59.548 lcore 2: 198791 00:04:59.548 lcore 3: 198787 00:04:59.548 done. 
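The four counters closing with "done." above are event_perf's per-lcore event totals after its one-second run; the 0xF core mask from the run_test trace is why four reactors start and four lcores report. Repeating the run standalone, assuming a built SPDK tree as the working directory:

  # -m 0xF selects lcores 0-3 (one reactor per set bit, matching the four counters);
  # -t 1 runs the event loop for one second before printing the totals
  ./test/event/event_perf/event_perf -m 0xF -t 1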
00:04:59.548 00:04:59.548 real 0m1.387s 00:04:59.548 user 0m4.196s 00:04:59.548 sys 0m0.075s 00:04:59.548 12:17:06 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.548 ************************************ 00:04:59.548 END TEST event_perf 00:04:59.548 ************************************ 00:04:59.548 12:17:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.548 12:17:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.548 12:17:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:59.548 12:17:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.548 12:17:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.548 ************************************ 00:04:59.548 START TEST event_reactor 00:04:59.548 ************************************ 00:04:59.548 12:17:06 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.548 [2024-12-16 12:17:06.427103] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:59.548 [2024-12-16 12:17:06.427193] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60029 ] 00:04:59.548 [2024-12-16 12:17:06.574285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.548 [2024-12-16 12:17:06.649475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.923 test_start 00:05:00.923 oneshot 00:05:00.923 tick 100 00:05:00.923 tick 100 00:05:00.923 tick 250 00:05:00.923 tick 100 00:05:00.923 tick 100 00:05:00.923 tick 250 00:05:00.923 tick 100 00:05:00.923 tick 500 00:05:00.923 tick 100 00:05:00.923 tick 100 00:05:00.923 tick 250 00:05:00.923 tick 100 00:05:00.923 tick 100 00:05:00.923 test_end 00:05:00.923 00:05:00.923 real 0m1.361s 00:05:00.923 user 0m1.202s 00:05:00.923 sys 0m0.052s 00:05:00.923 12:17:07 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.923 ************************************ 00:05:00.923 END TEST event_reactor 00:05:00.923 ************************************ 00:05:00.923 12:17:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:00.923 12:17:07 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.923 12:17:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:00.923 12:17:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.923 12:17:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.923 ************************************ 00:05:00.923 START TEST event_reactor_perf 00:05:00.923 ************************************ 00:05:00.923 12:17:07 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.923 [2024-12-16 12:17:07.836466] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
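The event_reactor tick trace above (a oneshot plus recurring tick 100 / 250 / 500 lines between test_start and test_end) and the event_reactor_perf run starting here both drive a single reactor on core 0 for one second. Per the run_test traces, the standalone invocations are, under the same built-tree assumption:

  # event_reactor: schedules a one-shot event and what the trace suggests are
  # timed pollers firing at three different periods, logging each tick
  ./test/event/reactor/reactor -t 1

  # event_reactor_perf: pushes events through the reactor as fast as possible
  # and prints an events-per-second figure on exit (reported just below)
  ./test/event/reactor_perf/reactor_perf -t 1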
00:05:00.923 [2024-12-16 12:17:07.836573] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60066 ] 00:05:00.923 [2024-12-16 12:17:07.997466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.184 [2024-12-16 12:17:08.091955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.569 test_start 00:05:02.569 test_end 00:05:02.569 Performance: 315945 events per second 00:05:02.569 00:05:02.569 real 0m1.453s 00:05:02.569 user 0m1.273s 00:05:02.569 sys 0m0.071s 00:05:02.569 12:17:09 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.569 ************************************ 00:05:02.570 END TEST event_reactor_perf 00:05:02.570 ************************************ 00:05:02.570 12:17:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.570 12:17:09 event -- event/event.sh@49 -- # uname -s 00:05:02.570 12:17:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.570 12:17:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.570 12:17:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.570 12:17:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.570 12:17:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.570 ************************************ 00:05:02.570 START TEST event_scheduler 00:05:02.570 ************************************ 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.570 * Looking for test storage... 
00:05:02.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.570 12:17:09 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.570 --rc genhtml_branch_coverage=1 00:05:02.570 --rc genhtml_function_coverage=1 00:05:02.570 --rc genhtml_legend=1 00:05:02.570 --rc geninfo_all_blocks=1 00:05:02.570 --rc geninfo_unexecuted_blocks=1 00:05:02.570 00:05:02.570 ' 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.570 --rc genhtml_branch_coverage=1 00:05:02.570 --rc genhtml_function_coverage=1 00:05:02.570 --rc genhtml_legend=1 00:05:02.570 --rc geninfo_all_blocks=1 00:05:02.570 --rc geninfo_unexecuted_blocks=1 00:05:02.570 00:05:02.570 ' 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.570 --rc genhtml_branch_coverage=1 00:05:02.570 --rc genhtml_function_coverage=1 00:05:02.570 --rc genhtml_legend=1 00:05:02.570 --rc geninfo_all_blocks=1 00:05:02.570 --rc geninfo_unexecuted_blocks=1 00:05:02.570 00:05:02.570 ' 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.570 --rc genhtml_branch_coverage=1 00:05:02.570 --rc genhtml_function_coverage=1 00:05:02.570 --rc genhtml_legend=1 00:05:02.570 --rc geninfo_all_blocks=1 00:05:02.570 --rc geninfo_unexecuted_blocks=1 00:05:02.570 00:05:02.570 ' 00:05:02.570 12:17:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:02.570 12:17:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60136 00:05:02.570 12:17:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.570 12:17:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60136 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60136 ']' 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:02.570 12:17:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.570 12:17:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.570 [2024-12-16 12:17:09.519769] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:02.570 [2024-12-16 12:17:09.519890] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60136 ] 00:05:02.570 [2024-12-16 12:17:09.672498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.830 [2024-12-16 12:17:09.774199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.830 [2024-12-16 12:17:09.774255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.830 [2024-12-16 12:17:09.774437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.830 [2024-12-16 12:17:09.774513] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.403 12:17:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.403 12:17:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:03.403 12:17:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:03.403 12:17:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.403 12:17:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.403 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.403 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.403 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.403 POWER: Cannot set governor of lcore 0 to performance 00:05:03.403 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.403 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.403 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.403 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.403 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:03.403 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:03.403 POWER: Unable to set Power Management Environment for lcore 0 00:05:03.403 [2024-12-16 12:17:10.359902] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:03.403 [2024-12-16 12:17:10.359923] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:03.403 [2024-12-16 12:17:10.359932] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:03.403 [2024-12-16 12:17:10.359948] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:03.403 [2024-12-16 12:17:10.359956] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:03.403 [2024-12-16 12:17:10.359965] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:03.403 12:17:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.403 12:17:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:03.403 12:17:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.403 12:17:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.664 [2024-12-16 12:17:10.586406] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:03.664 12:17:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.664 12:17:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:03.664 12:17:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.664 12:17:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.664 12:17:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.664 ************************************ 00:05:03.664 START TEST scheduler_create_thread 00:05:03.664 ************************************ 00:05:03.664 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:03.664 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:03.664 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.664 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.664 2 00:05:03.664 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.664 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:03.664 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 3 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 4 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 5 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 6 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 7 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 8 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 9 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 10 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.665 12:17:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.235 12:17:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.235 00:05:04.235 real 0m0.592s 00:05:04.235 user 0m0.015s 00:05:04.235 sys 0m0.004s 00:05:04.235 12:17:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.235 12:17:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.235 ************************************ 00:05:04.235 END TEST scheduler_create_thread 00:05:04.235 ************************************ 00:05:04.235 12:17:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:04.235 12:17:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60136 00:05:04.235 12:17:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60136 ']' 00:05:04.235 12:17:11 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60136 00:05:04.235 12:17:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:04.235 12:17:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.235 12:17:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60136 00:05:04.235 killing process with pid 60136 00:05:04.235 12:17:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:04.235 12:17:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:04.235 12:17:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60136' 00:05:04.235 12:17:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60136 00:05:04.235 12:17:11 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60136 00:05:04.802 [2024-12-16 12:17:11.675180] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:05.370 00:05:05.370 real 0m2.932s 00:05:05.370 user 0m5.570s 00:05:05.370 sys 0m0.333s 00:05:05.370 12:17:12 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.370 12:17:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.370 ************************************ 00:05:05.370 END TEST event_scheduler 00:05:05.370 ************************************ 00:05:05.370 12:17:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:05.370 12:17:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:05.370 12:17:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.370 12:17:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.370 12:17:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.370 ************************************ 00:05:05.370 START TEST app_repeat 00:05:05.370 ************************************ 00:05:05.370 12:17:12 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60215 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.370 Process app_repeat pid: 60215 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60215' 00:05:05.370 spdk_app_start Round 0 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:05.370 12:17:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60215 /var/tmp/spdk-nbd.sock 00:05:05.370 12:17:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60215 ']' 00:05:05.370 12:17:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.370 12:17:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.370 12:17:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.370 12:17:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.370 12:17:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.370 [2024-12-16 12:17:12.354218] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
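The app_repeat harness launched above can be reproduced by hand; a minimal sketch, assuming the SPDK tree at the /home/vagrant/spdk_repo/spdk path shown in the trace. Per the script, -r names the RPC socket, -m 0x3 pins the app to cores 0 and 1 (matching the two reactors below), and -t 4 mirrors the script's repeat_times=4:

    # start the repeat-test app in the background and keep its pid for the trap
    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    # waitforlisten then polls until the app answers on the UNIX socket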
00:05:05.370 [2024-12-16 12:17:12.354321] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60215 ] 00:05:05.631 [2024-12-16 12:17:12.514418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.631 [2024-12-16 12:17:12.612739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.631 [2024-12-16 12:17:12.612819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.203 12:17:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.203 12:17:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:06.203 12:17:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.463 Malloc0 00:05:06.463 12:17:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.724 Malloc1 00:05:06.724 12:17:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.724 12:17:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.985 /dev/nbd0 00:05:06.985 12:17:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.986 12:17:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:06.986 12:17:13 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.986 1+0 records in 00:05:06.986 1+0 records out 00:05:06.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395714 s, 10.4 MB/s 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.986 12:17:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.986 12:17:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.986 12:17:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.986 12:17:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.247 /dev/nbd1 00:05:07.247 12:17:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.247 12:17:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.247 1+0 records in 00:05:07.247 1+0 records out 00:05:07.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272969 s, 15.0 MB/s 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:07.247 12:17:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:07.247 12:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.247 12:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.247 12:17:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.247 12:17:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
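Both waitfornbd probes above follow one pattern: poll /proc/partitions for the device name (up to 20 tries, per the traced loop bound), read a single direct-I/O block back from the device, and require a non-empty result. A condensed sketch; the retry delay is an assumption, since both traced loops succeeded on the first pass:

    nbd=nbd0
    test_file=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
    for i in $(seq 1 20); do
        grep -q -w "$nbd" /proc/partitions && break   # device node registered?
        sleep 0.1                                     # assumed backoff, not visible in the trace
    done
    dd if=/dev/$nbd of="$test_file" bs=4096 count=1 iflag=direct
    [ "$(stat -c %s "$test_file")" != 0 ]             # trace checks '[' 4096 '!=' 0 ']'
    rm -f "$test_file"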
00:05:07.247 12:17:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:07.528 { 00:05:07.528 "nbd_device": "/dev/nbd0", 00:05:07.528 "bdev_name": "Malloc0" 00:05:07.528 }, 00:05:07.528 { 00:05:07.528 "nbd_device": "/dev/nbd1", 00:05:07.528 "bdev_name": "Malloc1" 00:05:07.528 } 00:05:07.528 ]' 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:07.528 { 00:05:07.528 "nbd_device": "/dev/nbd0", 00:05:07.528 "bdev_name": "Malloc0" 00:05:07.528 }, 00:05:07.528 { 00:05:07.528 "nbd_device": "/dev/nbd1", 00:05:07.528 "bdev_name": "Malloc1" 00:05:07.528 } 00:05:07.528 ]' 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.528 /dev/nbd1' 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.528 /dev/nbd1' 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.528 256+0 records in 00:05:07.528 256+0 records out 00:05:07.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414895 s, 253 MB/s 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.528 256+0 records in 00:05:07.528 256+0 records out 00:05:07.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279061 s, 37.6 MB/s 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.528 256+0 records in 00:05:07.528 256+0 records out 00:05:07.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184694 s, 56.8 MB/s 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.528 12:17:14 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.528 12:17:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.529 12:17:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.529 12:17:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.529 12:17:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.529 12:17:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.529 12:17:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.529 12:17:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.529 12:17:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:07.529 12:17:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.529 12:17:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.790 12:17:14 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.790 12:17:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.049 12:17:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.049 12:17:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.614 12:17:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:08.872 [2024-12-16 12:17:15.962943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.133 [2024-12-16 12:17:16.032521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.133 [2024-12-16 12:17:16.032773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.133 [2024-12-16 12:17:16.128246] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.133 [2024-12-16 12:17:16.128307] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.663 spdk_app_start Round 1 00:05:11.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.663 12:17:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.663 12:17:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:11.663 12:17:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60215 /var/tmp/spdk-nbd.sock 00:05:11.663 12:17:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60215 ']' 00:05:11.663 12:17:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.663 12:17:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.663 12:17:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
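Round 1 repeats the write/verify pass just traced for Round 0. Condensed from the nbd_dd_data_verify calls above, the per-round data check fills a 1 MiB scratch file from /dev/urandom, copies it to each NBD export with direct I/O, and compares every device byte-for-byte against the source:

    rand=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$rand" bs=4096 count=256             # 256 x 4 KiB = 1 MiB
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$rand" of="$nbd" bs=4096 count=256 oflag=direct  # write pass
        cmp -b -n 1M "$rand" "$nbd"                             # verify pass, fails loudly on mismatch
    done
    rm "$rand"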
00:05:11.663 12:17:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.663 12:17:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.663 12:17:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.663 12:17:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.663 12:17:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.920 Malloc0 00:05:11.920 12:17:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.182 Malloc1 00:05:12.182 12:17:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.182 12:17:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.182 12:17:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.182 12:17:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.183 12:17:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.183 /dev/nbd0 00:05:12.441 12:17:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.441 12:17:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.441 12:17:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:12.441 12:17:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.441 12:17:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.441 12:17:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.441 12:17:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:12.441 12:17:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.441 12:17:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.441 12:17:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.441 12:17:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.441 1+0 records in 00:05:12.441 1+0 records out 
00:05:12.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286843 s, 14.3 MB/s 00:05:12.441 12:17:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.442 12:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.442 12:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.442 12:17:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.442 /dev/nbd1 00:05:12.442 12:17:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.442 12:17:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.442 1+0 records in 00:05:12.442 1+0 records out 00:05:12.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256736 s, 16.0 MB/s 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.442 12:17:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.442 12:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.442 12:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.442 12:17:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.442 12:17:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.442 12:17:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.701 { 00:05:12.701 "nbd_device": "/dev/nbd0", 00:05:12.701 "bdev_name": "Malloc0" 00:05:12.701 }, 00:05:12.701 { 00:05:12.701 "nbd_device": "/dev/nbd1", 00:05:12.701 "bdev_name": "Malloc1" 00:05:12.701 } 
00:05:12.701 ]' 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.701 { 00:05:12.701 "nbd_device": "/dev/nbd0", 00:05:12.701 "bdev_name": "Malloc0" 00:05:12.701 }, 00:05:12.701 { 00:05:12.701 "nbd_device": "/dev/nbd1", 00:05:12.701 "bdev_name": "Malloc1" 00:05:12.701 } 00:05:12.701 ]' 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.701 /dev/nbd1' 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.701 /dev/nbd1' 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.701 256+0 records in 00:05:12.701 256+0 records out 00:05:12.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00975425 s, 107 MB/s 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.701 256+0 records in 00:05:12.701 256+0 records out 00:05:12.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142804 s, 73.4 MB/s 00:05:12.701 12:17:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.960 256+0 records in 00:05:12.960 256+0 records out 00:05:12.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193096 s, 54.3 MB/s 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.960 12:17:19 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.960 12:17:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.960 12:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.960 12:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.960 12:17:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.960 12:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.960 12:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.960 12:17:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.960 12:17:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.960 12:17:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.960 12:17:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.960 12:17:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.218 12:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.218 12:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.218 12:17:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.218 12:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.218 12:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.218 12:17:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.218 12:17:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.218 12:17:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.218 12:17:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.218 12:17:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.218 12:17:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.476 12:17:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.476 12:17:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.476 12:17:20 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:13.476 12:17:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.477 12:17:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.477 12:17:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.477 12:17:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.477 12:17:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.477 12:17:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.477 12:17:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.477 12:17:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.477 12:17:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.477 12:17:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.739 12:17:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.307 [2024-12-16 12:17:21.331202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.307 [2024-12-16 12:17:21.397696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.307 [2024-12-16 12:17:21.397900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.567 [2024-12-16 12:17:21.493440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.567 [2024-12-16 12:17:21.493488] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.099 spdk_app_start Round 2 00:05:17.099 12:17:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.099 12:17:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:17.099 12:17:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60215 /var/tmp/spdk-nbd.sock 00:05:17.099 12:17:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60215 ']' 00:05:17.099 12:17:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.099 12:17:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.099 12:17:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
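The end-of-round teardown, traced twice above, stops both exports and then asserts that nbd_get_disks reports none left. A sketch of that check; the trailing || true mirrors the trace, where grep -c exits non-zero on the empty listing:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for nbd in /dev/nbd0 /dev/nbd1; do
        $rpc nbd_stop_disk "$nbd"
    done
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]   # trace: '[' 0 -ne 0 ']' is false, so the round passes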
00:05:17.099 12:17:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.099 12:17:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.099 12:17:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.099 12:17:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.099 12:17:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.357 Malloc0 00:05:17.357 12:17:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.357 Malloc1 00:05:17.357 12:17:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.357 12:17:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.616 /dev/nbd0 00:05:17.616 12:17:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.616 12:17:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.616 1+0 records in 00:05:17.616 1+0 records out 
00:05:17.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319246 s, 12.8 MB/s 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.616 12:17:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.616 12:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.616 12:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.616 12:17:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.875 /dev/nbd1 00:05:17.875 12:17:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.875 12:17:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.875 1+0 records in 00:05:17.875 1+0 records out 00:05:17.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230779 s, 17.7 MB/s 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.875 12:17:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.875 12:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.875 12:17:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.875 12:17:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.875 12:17:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.875 12:17:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.138 12:17:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.138 { 00:05:18.138 "nbd_device": "/dev/nbd0", 00:05:18.138 "bdev_name": "Malloc0" 00:05:18.138 }, 00:05:18.138 { 00:05:18.138 "nbd_device": "/dev/nbd1", 00:05:18.138 "bdev_name": "Malloc1" 00:05:18.138 } 
00:05:18.138 ]' 00:05:18.138 12:17:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.138 { 00:05:18.138 "nbd_device": "/dev/nbd0", 00:05:18.138 "bdev_name": "Malloc0" 00:05:18.138 }, 00:05:18.138 { 00:05:18.138 "nbd_device": "/dev/nbd1", 00:05:18.138 "bdev_name": "Malloc1" 00:05:18.138 } 00:05:18.138 ]' 00:05:18.138 12:17:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.138 12:17:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.138 /dev/nbd1' 00:05:18.138 12:17:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.138 /dev/nbd1' 00:05:18.138 12:17:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.139 256+0 records in 00:05:18.139 256+0 records out 00:05:18.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537952 s, 195 MB/s 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.139 256+0 records in 00:05:18.139 256+0 records out 00:05:18.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012996 s, 80.7 MB/s 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.139 256+0 records in 00:05:18.139 256+0 records out 00:05:18.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171488 s, 61.1 MB/s 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.139 12:17:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.401 12:17:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.401 12:17:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.401 12:17:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.401 12:17:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.401 12:17:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.401 12:17:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.401 12:17:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.401 12:17:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.401 12:17:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.401 12:17:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.659 12:17:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.659 12:17:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.659 12:17:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.659 12:17:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.659 12:17:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.659 12:17:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.659 12:17:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.659 12:17:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.659 12:17:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.659 12:17:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.659 12:17:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.916 12:17:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.916 12:17:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.916 12:17:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:18.916 12:17:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.916 12:17:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.916 12:17:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.917 12:17:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.917 12:17:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.917 12:17:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.917 12:17:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.917 12:17:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.917 12:17:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.917 12:17:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.175 12:17:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.742 [2024-12-16 12:17:26.676678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.742 [2024-12-16 12:17:26.744047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.742 [2024-12-16 12:17:26.744050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.742 [2024-12-16 12:17:26.844940] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.742 [2024-12-16 12:17:26.844978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.273 12:17:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60215 /var/tmp/spdk-nbd.sock 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60215 ']' 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
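The shutdown below follows the killprocess helper: confirm the pid is still alive, look up its process name (the helper treats a sudo-owned wrapper specially), send SIGTERM, and wait for the exit. Reconstructed as a sketch from the traced steps; the stderr redirect and the shape of the non-sudo branch are assumptions:

    pid=60215                                # pid recorded for this run
    if kill -0 "$pid" 2>/dev/null; then      # signal 0 = existence check only
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] && kill "$pid"   # SIGTERM, as traced ('kill 60215')
        wait "$pid"                          # reap; works because the app is a child of the shell
    fi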
00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:22.274 12:17:29 event.app_repeat -- event/event.sh@39 -- # killprocess 60215 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60215 ']' 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60215 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.274 12:17:29 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60215 00:05:22.531 killing process with pid 60215 00:05:22.531 12:17:29 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.531 12:17:29 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.531 12:17:29 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60215' 00:05:22.531 12:17:29 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60215 00:05:22.531 12:17:29 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60215 00:05:22.792 spdk_app_start is called in Round 0. 00:05:22.792 Shutdown signal received, stop current app iteration 00:05:22.792 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:22.792 spdk_app_start is called in Round 1. 00:05:22.792 Shutdown signal received, stop current app iteration 00:05:22.792 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:22.792 spdk_app_start is called in Round 2. 00:05:22.792 Shutdown signal received, stop current app iteration 00:05:22.792 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:22.792 spdk_app_start is called in Round 3. 00:05:22.792 Shutdown signal received, stop current app iteration 00:05:22.792 12:17:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:22.792 12:17:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:22.792 00:05:22.792 real 0m17.568s 00:05:22.792 user 0m38.584s 00:05:22.792 sys 0m1.982s 00:05:22.792 12:17:29 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.792 12:17:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.792 ************************************ 00:05:22.792 END TEST app_repeat 00:05:22.792 ************************************ 00:05:23.052 12:17:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:23.052 12:17:29 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:23.052 12:17:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.052 12:17:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.052 12:17:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.052 ************************************ 00:05:23.052 START TEST cpu_locks 00:05:23.052 ************************************ 00:05:23.052 12:17:29 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:23.052 * Looking for test storage... 
00:05:23.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:23.052 12:17:29 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.053 12:17:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.053 12:17:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.053 12:17:30 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.053 12:17:30 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:23.053 12:17:30 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.053 12:17:30 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.053 --rc genhtml_branch_coverage=1 00:05:23.053 --rc genhtml_function_coverage=1 00:05:23.053 --rc genhtml_legend=1 00:05:23.053 --rc geninfo_all_blocks=1 00:05:23.053 --rc geninfo_unexecuted_blocks=1 00:05:23.053 00:05:23.053 ' 00:05:23.053 12:17:30 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.053 --rc genhtml_branch_coverage=1 00:05:23.053 --rc genhtml_function_coverage=1 
00:05:23.053 --rc genhtml_legend=1 00:05:23.053 --rc geninfo_all_blocks=1 00:05:23.053 --rc geninfo_unexecuted_blocks=1 00:05:23.053 00:05:23.053 ' 00:05:23.053 12:17:30 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.053 --rc genhtml_branch_coverage=1 00:05:23.053 --rc genhtml_function_coverage=1 00:05:23.053 --rc genhtml_legend=1 00:05:23.053 --rc geninfo_all_blocks=1 00:05:23.053 --rc geninfo_unexecuted_blocks=1 00:05:23.053 00:05:23.053 ' 00:05:23.053 12:17:30 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.053 --rc genhtml_branch_coverage=1 00:05:23.053 --rc genhtml_function_coverage=1 00:05:23.053 --rc genhtml_legend=1 00:05:23.053 --rc geninfo_all_blocks=1 00:05:23.053 --rc geninfo_unexecuted_blocks=1 00:05:23.053 00:05:23.053 ' 00:05:23.053 12:17:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:23.053 12:17:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:23.053 12:17:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:23.053 12:17:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:23.053 12:17:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.053 12:17:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.053 12:17:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.053 ************************************ 00:05:23.053 START TEST default_locks 00:05:23.053 ************************************ 00:05:23.053 12:17:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:23.053 12:17:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60651 00:05:23.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.053 12:17:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60651 00:05:23.053 12:17:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.053 12:17:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60651 ']' 00:05:23.053 12:17:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.053 12:17:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.053 12:17:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.053 12:17:30 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.053 12:17:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.312 [2024-12-16 12:17:30.163522] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
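The lcov version probe traced above (lt 1.15 2 via scripts/common.sh's cmp_versions) splits both versions on '.', '-' and ':' and compares them component by component, padding the shorter one with zeros. A compact sketch of that comparison, valid for purely numeric fields:

    version_lt() {    # usage: version_lt 1.15 2 -> exit 0 iff first < second
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) v a b
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing components count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not less-than
    }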
00:05:23.312 [2024-12-16 12:17:30.163630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60651 ] 00:05:23.312 [2024-12-16 12:17:30.321143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.312 [2024-12-16 12:17:30.396616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.246 12:17:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.246 12:17:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:24.246 12:17:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60651 00:05:24.246 12:17:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60651 00:05:24.246 12:17:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.246 12:17:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60651 00:05:24.246 12:17:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60651 ']' 00:05:24.246 12:17:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60651 00:05:24.246 12:17:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:24.247 12:17:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.247 12:17:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60651 00:05:24.247 killing process with pid 60651 00:05:24.247 12:17:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.247 12:17:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.247 12:17:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60651' 00:05:24.247 12:17:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60651 00:05:24.247 12:17:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60651 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60651 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60651 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:25.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
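The locks_exist check traced above is the heart of this suite: a target started with -m 0x1 takes a file lock for core 0 (/var/tmp/spdk_cpu_lock_000), and lslocks run against the pid shows it. Restated:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock    # any spdk_cpu_lock_* held by this pid?
    }
    locks_exist "$spdk_tgt_pid" && echo 'core lock is held'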
00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60651 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60651 ']' 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.620 ERROR: process (pid: 60651) is no longer running 00:05:25.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60651) - No such process 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.620 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:25.621 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:25.621 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:25.621 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:25.621 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:25.621 12:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:25.621 12:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:25.621 12:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:25.621 12:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:25.621 00:05:25.621 real 0m2.388s 00:05:25.621 user 0m2.402s 00:05:25.621 sys 0m0.460s 00:05:25.621 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.621 12:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.621 ************************************ 00:05:25.621 END TEST default_locks 00:05:25.621 ************************************ 00:05:25.621 12:17:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:25.621 12:17:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.621 12:17:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.621 12:17:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.621 ************************************ 00:05:25.621 START TEST default_locks_via_rpc 00:05:25.621 ************************************ 00:05:25.621 12:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:25.621 12:17:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60704 00:05:25.621 12:17:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60704 00:05:25.621 12:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60704 ']' 
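The NOT wrapper in this trace turns an expected failure into a pass: waitforlisten on a pid that was just killed must fail, and the es=1 seen above is the status being asserted. Its core, minus the signal-exit (>128) bookkeeping the real helper also does:

    NOT_sketch() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # succeed only when the wrapped command failed
    }
    NOT_sketch waitforlisten "$dead_pid" /var/tmp/spdk.sock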
00:05:25.621 12:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.621 12:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.621 12:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.621 12:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.621 12:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.621 12:17:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.621 [2024-12-16 12:17:32.581885] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:25.621 [2024-12-16 12:17:32.581970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60704 ] 00:05:25.879 [2024-12-16 12:17:32.737670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.879 [2024-12-16 12:17:32.830766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60704 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.445 12:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60704 00:05:26.703 12:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60704 00:05:26.703 12:17:33 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60704 ']' 00:05:26.703 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60704 00:05:26.703 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:26.703 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.703 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60704 00:05:26.703 killing process with pid 60704 00:05:26.703 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.703 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.703 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60704' 00:05:26.703 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60704 00:05:26.703 12:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60704 00:05:28.089 ************************************ 00:05:28.089 00:05:28.089 real 0m2.434s 00:05:28.089 user 0m2.448s 00:05:28.089 sys 0m0.419s 00:05:28.090 12:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.090 12:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.090 END TEST default_locks_via_rpc 00:05:28.090 ************************************ 00:05:28.090 12:17:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:28.090 12:17:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.090 12:17:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.090 12:17:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.090 ************************************ 00:05:28.090 START TEST non_locking_app_on_locked_coremask 00:05:28.090 ************************************ 00:05:28.090 12:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:28.090 12:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60757 00:05:28.090 12:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60757 /var/tmp/spdk.sock 00:05:28.090 12:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.090 12:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60757 ']' 00:05:28.090 12:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.090 12:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.090 12:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
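The default_locks_via_rpc test that just finished toggles the same locks at runtime instead of at startup. With rpc.py against a live target, the sequence it exercises looks like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks      # release and remove /var/tmp/spdk_cpu_lock_*
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null   # expect no matches while disabled
    $rpc framework_enable_cpumask_locks       # re-claim every core in the app's mask
    lslocks -p "$spdk_tgt_pid" | grep spdk_cpu_lock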
00:05:28.090 12:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.090 12:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.090 [2024-12-16 12:17:35.068472] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:28.090 [2024-12-16 12:17:35.068583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60757 ] 00:05:28.346 [2024-12-16 12:17:35.213414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.346 [2024-12-16 12:17:35.288844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.914 12:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.914 12:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:28.914 12:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:28.914 12:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60772 00:05:28.914 12:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60772 /var/tmp/spdk2.sock 00:05:28.914 12:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60772 ']' 00:05:28.914 12:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.914 12:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.914 12:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.914 12:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.914 12:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.914 [2024-12-16 12:17:35.977445] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:28.914 [2024-12-16 12:17:35.977730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60772 ] 00:05:29.171 [2024-12-16 12:17:36.138265] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
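non_locking_app_on_locked_coremask, now starting, runs two targets on the same core: the first claims the core 0 lock, the second passes --disable-cpumask-locks (the 'CPU core locks deactivated' notice above) plus its own RPC socket, so both can run. In outline:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $bin -m 0x1 &                                                # claims spdk_cpu_lock_000
    tgt1=$!
    $bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # same core, no claim attempt
    tgt2=$!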
00:05:29.171 [2024-12-16 12:17:36.138299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.430 [2024-12-16 12:17:36.290049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60757 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60757 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60757 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60757 ']' 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60757 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60757 00:05:30.364 killing process with pid 60757 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60757' 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60757 00:05:30.364 12:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60757 00:05:32.893 12:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60772 00:05:32.893 12:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60772 ']' 00:05:32.893 12:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60772 00:05:32.893 12:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.893 12:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.893 12:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60772 00:05:32.893 killing process with pid 60772 00:05:32.893 12:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.893 12:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.893 12:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60772' 00:05:32.893 12:17:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60772 00:05:32.893 12:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60772 00:05:34.266 00:05:34.266 real 0m6.033s 00:05:34.266 user 0m6.335s 00:05:34.266 sys 0m0.765s 00:05:34.266 ************************************ 00:05:34.266 END TEST non_locking_app_on_locked_coremask 00:05:34.266 ************************************ 00:05:34.266 12:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.266 12:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.266 12:17:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:34.266 12:17:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.266 12:17:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.266 12:17:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.266 ************************************ 00:05:34.266 START TEST locking_app_on_unlocked_coremask 00:05:34.266 ************************************ 00:05:34.266 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:34.266 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60863 00:05:34.266 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60863 /var/tmp/spdk.sock 00:05:34.266 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60863 ']' 00:05:34.266 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.266 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.266 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.266 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.266 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:34.266 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.266 [2024-12-16 12:17:41.130549] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:34.266 [2024-12-16 12:17:41.130733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60863 ] 00:05:34.266 [2024-12-16 12:17:41.278318] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:34.266 [2024-12-16 12:17:41.278441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.266 [2024-12-16 12:17:41.354609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.200 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.200 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.200 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:35.200 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60879 00:05:35.200 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60879 /var/tmp/spdk2.sock 00:05:35.200 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60879 ']' 00:05:35.200 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.200 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.200 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.200 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.200 12:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.200 [2024-12-16 12:17:42.035123] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
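locking_app_on_unlocked_coremask flips the roles: this time the first target is the one holding no lock (--disable-cpumask-locks), so the second, lock-enabled target starting above can claim core 0 even though the core is busy. Sketch, with $bin as in the previous outline:

    $bin -m 0x1 --disable-cpumask-locks &          # first target: core 0 left unclaimed
    $bin -m 0x1 -r /var/tmp/spdk2.sock &           # second target: takes the core 0 lock
    tgt2=$!
    lslocks -p "$tgt2" | grep spdk_cpu_lock        # lock file shows up under the second pid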
00:05:35.200 [2024-12-16 12:17:42.035449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60879 ] 00:05:35.200 [2024-12-16 12:17:42.196274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.457 [2024-12-16 12:17:42.356508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.391 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.391 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.391 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60879 00:05:36.391 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60879 00:05:36.391 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.649 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60863 00:05:36.649 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60863 ']' 00:05:36.649 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60863 00:05:36.649 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.649 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.649 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60863 00:05:36.649 killing process with pid 60863 00:05:36.649 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.649 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.649 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60863' 00:05:36.649 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60863 00:05:36.649 12:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60863 00:05:39.231 12:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60879 00:05:39.231 12:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60879 ']' 00:05:39.231 12:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60879 00:05:39.231 12:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.231 12:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.231 12:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60879 00:05:39.231 killing process with pid 60879 00:05:39.231 12:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.231 12:17:46 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.231 12:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60879' 00:05:39.231 12:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60879 00:05:39.231 12:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60879 00:05:40.167 ************************************ 00:05:40.167 END TEST locking_app_on_unlocked_coremask 00:05:40.167 ************************************ 00:05:40.167 00:05:40.167 real 0m6.147s 00:05:40.167 user 0m6.433s 00:05:40.167 sys 0m0.775s 00:05:40.167 12:17:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.167 12:17:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.167 12:17:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:40.167 12:17:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.167 12:17:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.167 12:17:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.427 ************************************ 00:05:40.427 START TEST locking_app_on_locked_coremask 00:05:40.427 ************************************ 00:05:40.427 12:17:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:40.427 12:17:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60976 00:05:40.427 12:17:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60976 /var/tmp/spdk.sock 00:05:40.427 12:17:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60976 ']' 00:05:40.427 12:17:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.427 12:17:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.427 12:17:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.427 12:17:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.427 12:17:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.427 12:17:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.427 [2024-12-16 12:17:47.351280] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:40.427 [2024-12-16 12:17:47.351399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60976 ] 00:05:40.427 [2024-12-16 12:17:47.508203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.686 [2024-12-16 12:17:47.608271] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60986 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60986 /var/tmp/spdk2.sock 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60986 /var/tmp/spdk2.sock 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60986 /var/tmp/spdk2.sock 00:05:41.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60986 ']' 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.258 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.258 [2024-12-16 12:17:48.283149] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
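The second target just launched (pid 60986) uses the same 0x1 mask as pid 60976 but no --disable-cpumask-locks, so its startup is expected to die in claim_cpu_cores, which is exactly what the ERROR entries that follow record. The shape of that negative check:

    $bin -m 0x1 -r /var/tmp/spdk2.sock &     # same mask, no opt-out: the claim will fail
    tgt2=$!
    if ! wait "$tgt2"; then
        echo 'second target refused the locked core, as expected'
    fi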
00:05:41.258 [2024-12-16 12:17:48.283841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60986 ] 00:05:41.518 [2024-12-16 12:17:48.457327] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60976 has claimed it. 00:05:41.518 [2024-12-16 12:17:48.457374] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:42.089 ERROR: process (pid: 60986) is no longer running 00:05:42.089 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60986) - No such process 00:05:42.089 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.089 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:42.089 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:42.089 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:42.089 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:42.089 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:42.089 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60976 00:05:42.089 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60976 00:05:42.089 12:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.089 12:17:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60976 00:05:42.089 12:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60976 ']' 00:05:42.089 12:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60976 00:05:42.089 12:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:42.089 12:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.089 12:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60976 00:05:42.089 killing process with pid 60976 00:05:42.089 12:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.089 12:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.089 12:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60976' 00:05:42.089 12:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60976 00:05:42.089 12:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60976 00:05:43.469 ************************************ 00:05:43.469 END TEST locking_app_on_locked_coremask 00:05:43.469 ************************************ 00:05:43.469 00:05:43.469 real 0m3.182s 00:05:43.469 user 0m3.406s 00:05:43.469 sys 0m0.491s 00:05:43.469 12:17:50 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.469 12:17:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.469 12:17:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:43.469 12:17:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.469 12:17:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.470 12:17:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.470 ************************************ 00:05:43.470 START TEST locking_overlapped_coremask 00:05:43.470 ************************************ 00:05:43.470 12:17:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:43.470 12:17:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61045 00:05:43.470 12:17:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61045 /var/tmp/spdk.sock 00:05:43.470 12:17:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61045 ']' 00:05:43.470 12:17:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.470 12:17:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:43.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.470 12:17:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.470 12:17:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.470 12:17:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.470 12:17:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.731 [2024-12-16 12:17:50.596994] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:43.731 [2024-12-16 12:17:50.597110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61045 ] 00:05:43.731 [2024-12-16 12:17:50.754601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.992 [2024-12-16 12:17:50.853451] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.992 [2024-12-16 12:17:50.853767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.992 [2024-12-16 12:17:50.853794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.562 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61063 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61063 /var/tmp/spdk2.sock 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61063 /var/tmp/spdk2.sock 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:44.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61063 /var/tmp/spdk2.sock 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61063 ']' 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.563 12:17:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.563 [2024-12-16 12:17:51.515873] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
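The two masks in locking_overlapped_coremask overlap on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the 0x1c launch below must fail on core 2. The contested set is just the AND of the masks:

    printf 'contested cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2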
00:05:44.563 [2024-12-16 12:17:51.516175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61063 ] 00:05:44.824 [2024-12-16 12:17:51.688215] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61045 has claimed it. 00:05:44.824 [2024-12-16 12:17:51.688262] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.085 ERROR: process (pid: 61063) is no longer running 00:05:45.085 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61063) - No such process 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61045 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61045 ']' 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61045 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61045 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61045' 00:05:45.085 killing process with pid 61045 00:05:45.085 12:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61045 00:05:45.085 12:17:52 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61045 00:05:46.508 00:05:46.508 real 0m2.875s 00:05:46.508 user 0m7.824s 00:05:46.508 sys 0m0.392s 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.508 ************************************ 00:05:46.508 END TEST locking_overlapped_coremask 00:05:46.508 ************************************ 00:05:46.508 12:17:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:46.508 12:17:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.508 12:17:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.508 12:17:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.508 ************************************ 00:05:46.508 START TEST locking_overlapped_coremask_via_rpc 00:05:46.508 ************************************ 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61116 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61116 /var/tmp/spdk.sock 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:46.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61116 ']' 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.508 12:17:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.508 [2024-12-16 12:17:53.511900] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:46.508 [2024-12-16 12:17:53.512179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61116 ] 00:05:46.768 [2024-12-16 12:17:53.666447] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
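Back in the locking_overlapped_coremask teardown above, after the failed 0x1c launch, check_remaining_locks asserts that only the 0x7 owner's three lock files survive; its body, as traced, is a plain glob comparison:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'no stray lock files'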
00:05:46.768 [2024-12-16 12:17:53.666479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.768 [2024-12-16 12:17:53.745664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.768 [2024-12-16 12:17:53.745923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.768 [2024-12-16 12:17:53.745941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.334 12:17:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.334 12:17:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.334 12:17:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61128 00:05:47.334 12:17:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:47.334 12:17:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61128 /var/tmp/spdk2.sock 00:05:47.334 12:17:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61128 ']' 00:05:47.334 12:17:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.334 12:17:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.334 12:17:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.334 12:17:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.334 12:17:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.334 [2024-12-16 12:17:54.418787] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:47.334 [2024-12-16 12:17:54.419015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61128 ] 00:05:47.592 [2024-12-16 12:17:54.592039] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
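The two masks are chosen to collide on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, matching the reactor start notices above and below. The overlap can be confirmed with plain shell arithmetic:

    $ printf '0x%x\n' $(( 0x7 & 0x1c ))
    0x4    # only bit 2 is common, i.e. both targets want core 2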
00:05:47.592 [2024-12-16 12:17:54.592079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.851 [2024-12-16 12:17:54.790949] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.851 [2024-12-16 12:17:54.794239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.851 [2024-12-16 12:17:54.794262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.231 [2024-12-16 12:17:55.945266] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61116 has claimed it. 00:05:49.231 request: 00:05:49.231 { 00:05:49.231 "method": "framework_enable_cpumask_locks", 00:05:49.231 "req_id": 1 00:05:49.231 } 00:05:49.231 Got JSON-RPC error response 00:05:49.231 response: 00:05:49.231 { 00:05:49.231 "code": -32603, 00:05:49.231 "message": "Failed to claim CPU core: 2" 00:05:49.231 } 00:05:49.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
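The wrapped rpc_cmd calls that follow reduce to two plain RPC invocations: enabling the lock files on the first instance succeeds and claims cores 0-2, while the same method against the second instance's socket fails with -32603 because core 2 is already locked. A sketch with rpc.py (socket path as in this run; -s selects a non-default socket):

    scripts/rpc.py framework_enable_cpumask_locks                         # claims lock_000..002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # 'Failed to claim CPU core: 2'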
00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61116 /var/tmp/spdk.sock 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61116 ']' 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.231 12:17:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61128 /var/tmp/spdk2.sock 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61128 ']' 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
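The check_remaining_locks step that runs next verifies that only the first instance's three lock files exist. Its core, reconstructed from the traced cpu_locks.sh lines, is a glob-versus-brace-expansion comparison:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever is actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 only
    [[ ${locks[*]} == "${locks_expected[*]}" ]]         # fails if extra locks leaked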
00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.231 ************************************ 00:05:49.231 END TEST locking_overlapped_coremask_via_rpc 00:05:49.231 ************************************ 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:49.231 00:05:49.231 real 0m2.877s 00:05:49.231 user 0m0.972s 00:05:49.231 sys 0m0.125s 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.231 12:17:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.490 12:17:56 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:49.490 12:17:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61116 ]] 00:05:49.490 12:17:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61116 00:05:49.490 12:17:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61116 ']' 00:05:49.490 12:17:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61116 00:05:49.490 12:17:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:49.490 12:17:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.490 12:17:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61116 00:05:49.490 killing process with pid 61116 00:05:49.490 12:17:56 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.490 12:17:56 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.490 12:17:56 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61116' 00:05:49.490 12:17:56 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61116 00:05:49.490 12:17:56 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61116 00:05:50.865 12:17:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61128 ]] 00:05:50.865 12:17:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61128 00:05:50.865 12:17:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61128 ']' 00:05:50.865 12:17:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61128 00:05:50.865 12:17:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:50.865 12:17:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.865 
12:17:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61128 00:05:50.865 killing process with pid 61128 00:05:50.865 12:17:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:50.865 12:17:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:50.865 12:17:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61128' 00:05:50.865 12:17:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61128 00:05:50.865 12:17:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61128 00:05:51.802 12:17:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.802 Process with pid 61116 is not found 00:05:51.802 Process with pid 61128 is not found 00:05:51.802 12:17:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:51.802 12:17:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61116 ]] 00:05:51.802 12:17:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61116 00:05:51.802 12:17:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61116 ']' 00:05:51.802 12:17:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61116 00:05:51.802 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61116) - No such process 00:05:51.802 12:17:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61116 is not found' 00:05:51.802 12:17:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61128 ]] 00:05:51.802 12:17:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61128 00:05:51.802 12:17:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61128 ']' 00:05:51.802 12:17:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61128 00:05:51.802 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61128) - No such process 00:05:51.802 12:17:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61128 is not found' 00:05:51.802 12:17:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.802 ************************************ 00:05:51.802 END TEST cpu_locks 00:05:51.802 ************************************ 00:05:51.802 00:05:51.802 real 0m28.827s 00:05:51.802 user 0m49.787s 00:05:51.802 sys 0m4.188s 00:05:51.802 12:17:58 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.802 12:17:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.802 ************************************ 00:05:51.802 END TEST event 00:05:51.802 ************************************ 00:05:51.802 00:05:51.802 real 0m53.967s 00:05:51.802 user 1m40.758s 00:05:51.802 sys 0m6.934s 00:05:51.802 12:17:58 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.802 12:17:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.802 12:17:58 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:51.802 12:17:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.802 12:17:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.802 12:17:58 -- common/autotest_common.sh@10 -- # set +x 00:05:51.802 ************************************ 00:05:51.802 START TEST thread 00:05:51.802 ************************************ 00:05:51.802 12:17:58 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:51.802 * Looking for test storage... 
00:05:51.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:51.802 12:17:58 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.802 12:17:58 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.802 12:17:58 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:52.063 12:17:58 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:52.063 12:17:58 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.063 12:17:58 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.063 12:17:58 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.063 12:17:58 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.063 12:17:58 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.063 12:17:58 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.063 12:17:58 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.063 12:17:58 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.063 12:17:58 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.063 12:17:58 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.063 12:17:58 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.063 12:17:58 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:52.063 12:17:58 thread -- scripts/common.sh@345 -- # : 1 00:05:52.063 12:17:58 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.063 12:17:58 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.063 12:17:58 thread -- scripts/common.sh@365 -- # decimal 1 00:05:52.063 12:17:58 thread -- scripts/common.sh@353 -- # local d=1 00:05:52.063 12:17:58 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.063 12:17:58 thread -- scripts/common.sh@355 -- # echo 1 00:05:52.063 12:17:58 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.063 12:17:58 thread -- scripts/common.sh@366 -- # decimal 2 00:05:52.063 12:17:58 thread -- scripts/common.sh@353 -- # local d=2 00:05:52.063 12:17:58 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.063 12:17:58 thread -- scripts/common.sh@355 -- # echo 2 00:05:52.063 12:17:58 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.063 12:17:58 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.063 12:17:58 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.063 12:17:58 thread -- scripts/common.sh@368 -- # return 0 00:05:52.063 12:17:58 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.063 12:17:58 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:52.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.063 --rc genhtml_branch_coverage=1 00:05:52.063 --rc genhtml_function_coverage=1 00:05:52.063 --rc genhtml_legend=1 00:05:52.063 --rc geninfo_all_blocks=1 00:05:52.063 --rc geninfo_unexecuted_blocks=1 00:05:52.063 00:05:52.063 ' 00:05:52.063 12:17:58 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:52.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.063 --rc genhtml_branch_coverage=1 00:05:52.063 --rc genhtml_function_coverage=1 00:05:52.064 --rc genhtml_legend=1 00:05:52.064 --rc geninfo_all_blocks=1 00:05:52.064 --rc geninfo_unexecuted_blocks=1 00:05:52.064 00:05:52.064 ' 00:05:52.064 12:17:58 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:52.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:52.064 --rc genhtml_branch_coverage=1 00:05:52.064 --rc genhtml_function_coverage=1 00:05:52.064 --rc genhtml_legend=1 00:05:52.064 --rc geninfo_all_blocks=1 00:05:52.064 --rc geninfo_unexecuted_blocks=1 00:05:52.064 00:05:52.064 ' 00:05:52.064 12:17:58 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:52.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.064 --rc genhtml_branch_coverage=1 00:05:52.064 --rc genhtml_function_coverage=1 00:05:52.064 --rc genhtml_legend=1 00:05:52.064 --rc geninfo_all_blocks=1 00:05:52.064 --rc geninfo_unexecuted_blocks=1 00:05:52.064 00:05:52.064 ' 00:05:52.064 12:17:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:52.064 12:17:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:52.064 12:17:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.064 12:17:58 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.064 ************************************ 00:05:52.064 START TEST thread_poller_perf 00:05:52.064 ************************************ 00:05:52.064 12:17:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:52.064 [2024-12-16 12:17:59.009393] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:52.064 [2024-12-16 12:17:59.009587] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61288 ] 00:05:52.326 [2024-12-16 12:17:59.170485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.326 [2024-12-16 12:17:59.263404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.326 Running 1000 pollers for 1 seconds with 1 microseconds period. 
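Per the banner just printed, poller_perf's flags map directly onto the run parameters: -b is the poller count, -l the poll period in microseconds (the follow-up run uses 0, i.e. an active poller with no timer), and -t the duration in seconds. The two invocations thread.sh drives:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # same count, period 0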
00:05:53.713 [2024-12-16T12:18:00.819Z] ====================================== 00:05:53.713 [2024-12-16T12:18:00.819Z] busy:2613061994 (cyc) 00:05:53.713 [2024-12-16T12:18:00.819Z] total_run_count: 307000 00:05:53.713 [2024-12-16T12:18:00.819Z] tsc_hz: 2600000000 (cyc) 00:05:53.713 [2024-12-16T12:18:00.819Z] ====================================== 00:05:53.713 [2024-12-16T12:18:00.819Z] poller_cost: 8511 (cyc), 3273 (nsec) 00:05:53.713 00:05:53.713 real 0m1.446s 00:05:53.713 user 0m1.271s 00:05:53.713 sys 0m0.068s 00:05:53.713 12:18:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.713 12:18:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.713 ************************************ 00:05:53.713 END TEST thread_poller_perf 00:05:53.713 ************************************ 00:05:53.713 12:18:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:53.713 12:18:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:53.713 12:18:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.713 12:18:00 thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.713 ************************************ 00:05:53.713 START TEST thread_poller_perf 00:05:53.713 ************************************ 00:05:53.713 12:18:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:53.713 [2024-12-16 12:18:00.515013] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:53.713 [2024-12-16 12:18:00.515121] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61319 ] 00:05:53.713 [2024-12-16 12:18:00.675056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.713 [2024-12-16 12:18:00.769927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.713 Running 1000 pollers for 1 seconds with 0 microseconds period. 
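The poller_cost line is derived from the two counters above it: busy cycles divided by total_run_count gives cycles per poll, and dividing by tsc_hz converts that to nanoseconds. Reproducing the first run's figures with integer shell arithmetic:

    $ echo $(( 2613061994 / 307000 ))              # busy cycles / run count
    8511
    $ echo $(( 8511 * 1000000000 / 2600000000 ))   # cycles -> nsec at 2.6 GHz
    3273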
00:05:55.102 [2024-12-16T12:18:02.208Z] ====================================== 00:05:55.102 [2024-12-16T12:18:02.208Z] busy:2603593230 (cyc) 00:05:55.102 [2024-12-16T12:18:02.208Z] total_run_count: 3649000 00:05:55.102 [2024-12-16T12:18:02.208Z] tsc_hz: 2600000000 (cyc) 00:05:55.102 [2024-12-16T12:18:02.208Z] ====================================== 00:05:55.102 [2024-12-16T12:18:02.208Z] poller_cost: 713 (cyc), 274 (nsec) 00:05:55.102 ************************************ 00:05:55.102 END TEST thread_poller_perf 00:05:55.102 ************************************ 00:05:55.102 00:05:55.102 real 0m1.435s 00:05:55.102 user 0m1.262s 00:05:55.102 sys 0m0.065s 00:05:55.102 12:18:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.102 12:18:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.102 12:18:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:55.102 ************************************ 00:05:55.102 END TEST thread 00:05:55.102 ************************************ 00:05:55.102 00:05:55.102 real 0m3.137s 00:05:55.102 user 0m2.646s 00:05:55.102 sys 0m0.246s 00:05:55.102 12:18:01 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.102 12:18:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.102 12:18:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:55.102 12:18:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:55.102 12:18:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.102 12:18:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.102 12:18:02 -- common/autotest_common.sh@10 -- # set +x 00:05:55.102 ************************************ 00:05:55.102 START TEST app_cmdline 00:05:55.102 ************************************ 00:05:55.102 12:18:02 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:55.102 * Looking for test storage... 
00:05:55.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:55.102 12:18:02 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:55.102 12:18:02 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.103 12:18:02 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:55.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.103 --rc genhtml_branch_coverage=1 00:05:55.103 --rc genhtml_function_coverage=1 00:05:55.103 --rc genhtml_legend=1 00:05:55.103 --rc geninfo_all_blocks=1 00:05:55.103 --rc geninfo_unexecuted_blocks=1 00:05:55.103 00:05:55.103 ' 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:55.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.103 --rc genhtml_branch_coverage=1 00:05:55.103 --rc genhtml_function_coverage=1 00:05:55.103 --rc genhtml_legend=1 00:05:55.103 --rc geninfo_all_blocks=1 00:05:55.103 --rc geninfo_unexecuted_blocks=1 00:05:55.103 
00:05:55.103 ' 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:55.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.103 --rc genhtml_branch_coverage=1 00:05:55.103 --rc genhtml_function_coverage=1 00:05:55.103 --rc genhtml_legend=1 00:05:55.103 --rc geninfo_all_blocks=1 00:05:55.103 --rc geninfo_unexecuted_blocks=1 00:05:55.103 00:05:55.103 ' 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:55.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.103 --rc genhtml_branch_coverage=1 00:05:55.103 --rc genhtml_function_coverage=1 00:05:55.103 --rc genhtml_legend=1 00:05:55.103 --rc geninfo_all_blocks=1 00:05:55.103 --rc geninfo_unexecuted_blocks=1 00:05:55.103 00:05:55.103 ' 00:05:55.103 12:18:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:55.103 12:18:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61408 00:05:55.103 12:18:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61408 00:05:55.103 12:18:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61408 ']' 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.103 12:18:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.362 [2024-12-16 12:18:02.236877] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
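cmdline.sh starts this target with an RPC allowlist, so only the two named methods reach a handler and everything else is rejected before dispatch. What the test exercises next, sketched as plain rpc.py calls:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version          # allowed: returns the version JSON
    scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two methods
    scripts/rpc.py env_dpdk_get_mem_stats    # blocked: -32601 'Method not found'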
00:05:55.362 [2024-12-16 12:18:02.236997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61408 ] 00:05:55.362 [2024-12-16 12:18:02.397807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.622 [2024-12-16 12:18:02.492702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.189 12:18:03 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.189 12:18:03 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:56.189 12:18:03 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:56.189 { 00:05:56.189 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:05:56.189 "fields": { 00:05:56.189 "major": 25, 00:05:56.189 "minor": 1, 00:05:56.189 "patch": 0, 00:05:56.189 "suffix": "-pre", 00:05:56.189 "commit": "e01cb43b8" 00:05:56.189 } 00:05:56.189 } 00:05:56.189 12:18:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:56.189 12:18:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:56.189 12:18:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:56.189 12:18:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:56.189 12:18:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:56.189 12:18:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:56.189 12:18:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:56.189 12:18:03 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.189 12:18:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.449 12:18:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:56.449 12:18:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:56.449 12:18:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:56.449 request: 00:05:56.449 { 00:05:56.449 "method": "env_dpdk_get_mem_stats", 00:05:56.449 "req_id": 1 00:05:56.449 } 00:05:56.449 Got JSON-RPC error response 00:05:56.449 response: 00:05:56.449 { 00:05:56.449 "code": -32601, 00:05:56.449 "message": "Method not found" 00:05:56.449 } 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.449 12:18:03 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.450 12:18:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61408 00:05:56.450 12:18:03 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61408 ']' 00:05:56.450 12:18:03 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61408 00:05:56.450 12:18:03 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:56.450 12:18:03 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.450 12:18:03 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61408 00:05:56.450 killing process with pid 61408 00:05:56.450 12:18:03 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.450 12:18:03 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.450 12:18:03 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61408' 00:05:56.450 12:18:03 app_cmdline -- common/autotest_common.sh@973 -- # kill 61408 00:05:56.450 12:18:03 app_cmdline -- common/autotest_common.sh@978 -- # wait 61408 00:05:58.355 ************************************ 00:05:58.355 END TEST app_cmdline 00:05:58.355 ************************************ 00:05:58.355 00:05:58.355 real 0m3.000s 00:05:58.355 user 0m3.245s 00:05:58.355 sys 0m0.410s 00:05:58.355 12:18:05 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.355 12:18:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:58.355 12:18:05 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:58.355 12:18:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.355 12:18:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.355 12:18:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.355 ************************************ 00:05:58.355 START TEST version 00:05:58.355 ************************************ 00:05:58.355 12:18:05 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:58.355 * Looking for test storage... 
00:05:58.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:58.355 12:18:05 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:58.355 12:18:05 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:58.355 12:18:05 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:58.355 12:18:05 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:58.355 12:18:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.355 12:18:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.355 12:18:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.355 12:18:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.355 12:18:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.355 12:18:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.355 12:18:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.355 12:18:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.355 12:18:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.355 12:18:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.355 12:18:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.355 12:18:05 version -- scripts/common.sh@344 -- # case "$op" in 00:05:58.355 12:18:05 version -- scripts/common.sh@345 -- # : 1 00:05:58.355 12:18:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.355 12:18:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.355 12:18:05 version -- scripts/common.sh@365 -- # decimal 1 00:05:58.355 12:18:05 version -- scripts/common.sh@353 -- # local d=1 00:05:58.355 12:18:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.355 12:18:05 version -- scripts/common.sh@355 -- # echo 1 00:05:58.355 12:18:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.355 12:18:05 version -- scripts/common.sh@366 -- # decimal 2 00:05:58.355 12:18:05 version -- scripts/common.sh@353 -- # local d=2 00:05:58.355 12:18:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.355 12:18:05 version -- scripts/common.sh@355 -- # echo 2 00:05:58.355 12:18:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.355 12:18:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.355 12:18:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.355 12:18:05 version -- scripts/common.sh@368 -- # return 0 00:05:58.356 12:18:05 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.356 12:18:05 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:58.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.356 --rc genhtml_branch_coverage=1 00:05:58.356 --rc genhtml_function_coverage=1 00:05:58.356 --rc genhtml_legend=1 00:05:58.356 --rc geninfo_all_blocks=1 00:05:58.356 --rc geninfo_unexecuted_blocks=1 00:05:58.356 00:05:58.356 ' 00:05:58.356 12:18:05 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:58.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.356 --rc genhtml_branch_coverage=1 00:05:58.356 --rc genhtml_function_coverage=1 00:05:58.356 --rc genhtml_legend=1 00:05:58.356 --rc geninfo_all_blocks=1 00:05:58.356 --rc geninfo_unexecuted_blocks=1 00:05:58.356 00:05:58.356 ' 00:05:58.356 12:18:05 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:58.356 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:58.356 --rc genhtml_branch_coverage=1 00:05:58.356 --rc genhtml_function_coverage=1 00:05:58.356 --rc genhtml_legend=1 00:05:58.356 --rc geninfo_all_blocks=1 00:05:58.356 --rc geninfo_unexecuted_blocks=1 00:05:58.356 00:05:58.356 ' 00:05:58.356 12:18:05 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:58.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.356 --rc genhtml_branch_coverage=1 00:05:58.356 --rc genhtml_function_coverage=1 00:05:58.356 --rc genhtml_legend=1 00:05:58.356 --rc geninfo_all_blocks=1 00:05:58.356 --rc geninfo_unexecuted_blocks=1 00:05:58.356 00:05:58.356 ' 00:05:58.356 12:18:05 version -- app/version.sh@17 -- # get_header_version major 00:05:58.356 12:18:05 version -- app/version.sh@14 -- # cut -f2 00:05:58.356 12:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.356 12:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.356 12:18:05 version -- app/version.sh@17 -- # major=25 00:05:58.356 12:18:05 version -- app/version.sh@18 -- # get_header_version minor 00:05:58.356 12:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.356 12:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.356 12:18:05 version -- app/version.sh@14 -- # cut -f2 00:05:58.356 12:18:05 version -- app/version.sh@18 -- # minor=1 00:05:58.356 12:18:05 version -- app/version.sh@19 -- # get_header_version patch 00:05:58.356 12:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.356 12:18:05 version -- app/version.sh@14 -- # cut -f2 00:05:58.356 12:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.356 12:18:05 version -- app/version.sh@19 -- # patch=0 00:05:58.356 12:18:05 version -- app/version.sh@20 -- # get_header_version suffix 00:05:58.356 12:18:05 version -- app/version.sh@14 -- # cut -f2 00:05:58.356 12:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.356 12:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.356 12:18:05 version -- app/version.sh@20 -- # suffix=-pre 00:05:58.356 12:18:05 version -- app/version.sh@22 -- # version=25.1 00:05:58.356 12:18:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:58.356 12:18:05 version -- app/version.sh@28 -- # version=25.1rc0 00:05:58.356 12:18:05 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:58.356 12:18:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:58.356 12:18:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:58.356 12:18:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:58.356 ************************************ 00:05:58.356 END TEST version 00:05:58.356 ************************************ 00:05:58.356 00:05:58.356 real 0m0.181s 00:05:58.356 user 0m0.122s 00:05:58.356 sys 0m0.087s 00:05:58.356 12:18:05 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.356 12:18:05 version -- common/autotest_common.sh@10 -- # set +x 00:05:58.356 12:18:05 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:58.356 12:18:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:58.356 12:18:05 -- spdk/autotest.sh@194 -- # uname -s 00:05:58.356 12:18:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:58.356 12:18:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:58.356 12:18:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:58.356 12:18:05 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:58.356 12:18:05 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:58.356 12:18:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:58.356 12:18:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.356 12:18:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.356 ************************************ 00:05:58.356 START TEST blockdev_nvme 00:05:58.356 ************************************ 00:05:58.356 12:18:05 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:58.356 * Looking for test storage... 00:05:58.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:58.356 12:18:05 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:58.356 12:18:05 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:05:58.356 12:18:05 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:58.356 12:18:05 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.356 12:18:05 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:58.613 12:18:05 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.613 12:18:05 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.613 12:18:05 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.613 12:18:05 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:58.613 12:18:05 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.613 12:18:05 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:58.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.613 --rc genhtml_branch_coverage=1 00:05:58.613 --rc genhtml_function_coverage=1 00:05:58.613 --rc genhtml_legend=1 00:05:58.613 --rc geninfo_all_blocks=1 00:05:58.613 --rc geninfo_unexecuted_blocks=1 00:05:58.613 00:05:58.613 ' 00:05:58.613 12:18:05 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:58.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.613 --rc genhtml_branch_coverage=1 00:05:58.613 --rc genhtml_function_coverage=1 00:05:58.613 --rc genhtml_legend=1 00:05:58.613 --rc geninfo_all_blocks=1 00:05:58.613 --rc geninfo_unexecuted_blocks=1 00:05:58.613 00:05:58.613 ' 00:05:58.613 12:18:05 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:58.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.613 --rc genhtml_branch_coverage=1 00:05:58.613 --rc genhtml_function_coverage=1 00:05:58.613 --rc genhtml_legend=1 00:05:58.613 --rc geninfo_all_blocks=1 00:05:58.613 --rc geninfo_unexecuted_blocks=1 00:05:58.613 00:05:58.613 ' 00:05:58.613 12:18:05 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:58.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.613 --rc genhtml_branch_coverage=1 00:05:58.613 --rc genhtml_function_coverage=1 00:05:58.613 --rc genhtml_legend=1 00:05:58.613 --rc geninfo_all_blocks=1 00:05:58.613 --rc geninfo_unexecuted_blocks=1 00:05:58.613 00:05:58.613 ' 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:58.613 12:18:05 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:58.613 12:18:05 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:58.614 12:18:05 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61580 00:05:58.614 12:18:05 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:58.614 12:18:05 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61580 00:05:58.614 12:18:05 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61580 ']' 00:05:58.614 12:18:05 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.614 12:18:05 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.614 12:18:05 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:58.614 12:18:05 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.614 12:18:05 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.614 12:18:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:58.614 [2024-12-16 12:18:05.540482] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
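The JSON that gen_nvme.sh emits below is a batch of bdev_nvme_attach_controller calls, one per emulated PCIe address. For a single controller, the equivalent call over the RPC socket would look roughly like this (the -b/-t/-a flags reflect standard rpc.py usage and are an assumption, not taken from this log):

    # attach the QEMU NVMe controller at 0000:00:10.0 as bdev 'Nvme0' (assumed flags)
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0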
00:05:58.614 [2024-12-16 12:18:05.540777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61580 ] 00:05:58.614 [2024-12-16 12:18:05.699143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.871 [2024-12-16 12:18:05.793420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.442 12:18:06 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.442 12:18:06 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:59.442 12:18:06 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:59.442 12:18:06 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:59.442 12:18:06 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:59.442 12:18:06 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:59.442 12:18:06 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:59.442 12:18:06 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:59.442 12:18:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.442 12:18:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.702 12:18:06 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.702 12:18:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:59.702 12:18:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.702 12:18:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.702 12:18:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.702 12:18:06 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:59.702 12:18:06 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.702 12:18:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.702 12:18:06 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:59.961 12:18:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.961 12:18:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:59.961 12:18:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:59.962 12:18:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e8ef2974-2714-4bea-8520-f5793907ab5e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e8ef2974-2714-4bea-8520-f5793907ab5e",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "5f8218ba-6175-4c0b-96e2-62bce04c4417"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5f8218ba-6175-4c0b-96e2-62bce04c4417",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5b762a4f-7aae-43ec-bb51-436dfb5b77dd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5b762a4f-7aae-43ec-bb51-436dfb5b77dd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a179c384-0cc0-4be5-9ede-10990a5f2cd5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a179c384-0cc0-4be5-9ede-10990a5f2cd5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "606a1b4c-2e4c-43e8-9655-50dea01d5723"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "606a1b4c-2e4c-43e8-9655-50dea01d5723",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "61a66a92-002f-446e-8766-e00f83f9d1e5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "61a66a92-002f-446e-8766-e00f83f9d1e5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:59.962 12:18:06 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:59.962 12:18:06 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:59.962 12:18:06 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:59.962 12:18:06 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61580 00:05:59.962 12:18:06 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61580 ']' 00:05:59.962 12:18:06 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61580 00:05:59.962 12:18:06 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:59.962 12:18:06 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.962 12:18:06 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61580 00:05:59.962 killing process with pid 61580 00:05:59.962 12:18:06 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.962 12:18:06 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.962 12:18:06 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61580' 00:05:59.962 12:18:06 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61580 00:05:59.962 12:18:06 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61580 00:06:01.336 12:18:08 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:01.336 12:18:08 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:01.336 12:18:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:01.336 12:18:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.336 12:18:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:01.336 ************************************ 00:06:01.336 START TEST bdev_hello_world 00:06:01.336 ************************************ 00:06:01.337 12:18:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:01.337 [2024-12-16 12:18:08.435975] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:01.337 [2024-12-16 12:18:08.436083] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61664 ] 00:06:01.595 [2024-12-16 12:18:08.594629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.595 [2024-12-16 12:18:08.687563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.162 [2024-12-16 12:18:09.225123] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:02.162 [2024-12-16 12:18:09.225178] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:02.162 [2024-12-16 12:18:09.225199] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:02.162 [2024-12-16 12:18:09.227625] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:02.162 [2024-12-16 12:18:09.228261] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:02.162 [2024-12-16 12:18:09.228296] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:02.162 [2024-12-16 12:18:09.228511] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:02.162 00:06:02.162 [2024-12-16 12:18:09.228536] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:03.099 00:06:03.099 real 0m1.575s 00:06:03.099 user 0m1.286s 00:06:03.099 sys 0m0.182s 00:06:03.099 12:18:09 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.099 12:18:09 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:03.099 ************************************ 00:06:03.099 END TEST bdev_hello_world 00:06:03.099 ************************************ 00:06:03.099 12:18:09 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:03.099 12:18:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:03.099 12:18:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.099 12:18:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.099 ************************************ 00:06:03.099 START TEST bdev_bounds 00:06:03.099 ************************************ 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61701 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.099 Process bdevio pid: 61701 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61701' 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61701 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61701 ']' 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.099 12:18:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:03.099 [2024-12-16 12:18:10.051644] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:03.099 [2024-12-16 12:18:10.051764] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61701 ] 00:06:03.358 [2024-12-16 12:18:10.211294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.358 [2024-12-16 12:18:10.310584] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.358 [2024-12-16 12:18:10.310770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.358 [2024-12-16 12:18:10.311006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.925 12:18:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.925 12:18:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:03.925 12:18:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:04.184 I/O targets: 00:06:04.184 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:04.184 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:04.184 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:04.184 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:04.184 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:04.184 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:04.184 00:06:04.184 00:06:04.184 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.184 http://cunit.sourceforge.net/ 00:06:04.184 00:06:04.184 00:06:04.184 Suite: bdevio tests on: Nvme3n1 00:06:04.184 Test: blockdev write read block ...passed 00:06:04.184 Test: blockdev write zeroes read block ...passed 00:06:04.184 Test: blockdev write zeroes read no split ...passed 00:06:04.184 Test: blockdev write zeroes read split ...passed 00:06:04.184 Test: blockdev write zeroes read split partial ...passed 00:06:04.184 Test: blockdev reset ...[2024-12-16 12:18:11.100844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:04.184 [2024-12-16 12:18:11.104865] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:04.184 passed 00:06:04.184 Test: blockdev write read 8 blocks ...passed 00:06:04.184 Test: blockdev write read size > 128k ...passed 00:06:04.184 Test: blockdev write read invalid size ...passed 00:06:04.184 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.184 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.184 Test: blockdev write read max offset ...passed 00:06:04.184 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.184 Test: blockdev writev readv 8 blocks ...passed 00:06:04.184 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.184 Test: blockdev writev readv block ...passed 00:06:04.184 Test: blockdev writev readv size > 128k ...passed 00:06:04.184 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.184 Test: blockdev comparev and writev ...passed 00:06:04.184 Test: blockdev nvme passthru rw ...[2024-12-16 12:18:11.111150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b680a000 len:0x1000 00:06:04.184 [2024-12-16 12:18:11.111206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:04.184 passed 00:06:04.184 Test: blockdev nvme passthru vendor specific ...passed 00:06:04.184 Test: blockdev nvme admin passthru ...[2024-12-16 12:18:11.111670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:04.184 [2024-12-16 12:18:11.111704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:04.184 passed 00:06:04.184 Test: blockdev copy ...passed 00:06:04.184 Suite: bdevio tests on: Nvme2n3 00:06:04.184 Test: blockdev write read block ...passed 00:06:04.184 Test: blockdev write zeroes read block ...passed 00:06:04.184 Test: blockdev write zeroes read no split ...passed 00:06:04.184 Test: blockdev write zeroes read split ...passed 00:06:04.184 Test: blockdev write zeroes read split partial ...passed 00:06:04.184 Test: blockdev reset ...[2024-12-16 12:18:11.167298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:04.184 passed 00:06:04.184 Test: blockdev write read 8 blocks ...[2024-12-16 12:18:11.170277] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:04.184 passed 00:06:04.184 Test: blockdev write read size > 128k ...passed 00:06:04.184 Test: blockdev write read invalid size ...passed 00:06:04.184 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.184 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.184 Test: blockdev write read max offset ...passed 00:06:04.184 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.184 Test: blockdev writev readv 8 blocks ...passed 00:06:04.184 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.184 Test: blockdev writev readv block ...passed 00:06:04.184 Test: blockdev writev readv size > 128k ...passed 00:06:04.184 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.184 Test: blockdev comparev and writev ...passed 00:06:04.184 Test: blockdev nvme passthru rw ...[2024-12-16 12:18:11.177351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x299a06000 len:0x1000 00:06:04.184 [2024-12-16 12:18:11.177385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:04.184 passed 00:06:04.184 Test: blockdev nvme passthru vendor specific ...[2024-12-16 12:18:11.178026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:04.184 [2024-12-16 12:18:11.178050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:04.184 passed 00:06:04.184 Test: blockdev nvme admin passthru ...passed 00:06:04.184 Test: blockdev copy ...passed 00:06:04.184 Suite: bdevio tests on: Nvme2n2 00:06:04.184 Test: blockdev write read block ...passed 00:06:04.184 Test: blockdev write zeroes read block ...passed 00:06:04.184 Test: blockdev write zeroes read no split ...passed 00:06:04.184 Test: blockdev write zeroes read split ...passed 00:06:04.184 Test: blockdev write zeroes read split partial ...passed 00:06:04.184 Test: blockdev reset ...[2024-12-16 12:18:11.233630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:04.184 passed 00:06:04.184 Test: blockdev write read 8 blocks ...[2024-12-16 12:18:11.237853] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:04.184 passed 00:06:04.184 Test: blockdev write read size > 128k ...passed 00:06:04.184 Test: blockdev write read invalid size ...passed 00:06:04.184 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.184 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.184 Test: blockdev write read max offset ...passed 00:06:04.184 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.184 Test: blockdev writev readv 8 blocks ...passed 00:06:04.184 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.184 Test: blockdev writev readv block ...passed 00:06:04.184 Test: blockdev writev readv size > 128k ...passed 00:06:04.185 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.185 Test: blockdev comparev and writev ...[2024-12-16 12:18:11.244108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8c3c000 len:0x1000 00:06:04.185 [2024-12-16 12:18:11.244144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:04.185 passed 00:06:04.185 Test: blockdev nvme passthru rw ...passed 00:06:04.185 Test: blockdev nvme passthru vendor specific ...passed 00:06:04.185 Test: blockdev nvme admin passthru ...[2024-12-16 12:18:11.244674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:04.185 [2024-12-16 12:18:11.244692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:04.185 passed 00:06:04.185 Test: blockdev copy ...passed 00:06:04.185 Suite: bdevio tests on: Nvme2n1 00:06:04.185 Test: blockdev write read block ...passed 00:06:04.185 Test: blockdev write zeroes read block ...passed 00:06:04.185 Test: blockdev write zeroes read no split ...passed 00:06:04.185 Test: blockdev write zeroes read split ...passed 00:06:04.443 Test: blockdev write zeroes read split partial ...passed 00:06:04.443 Test: blockdev reset ...[2024-12-16 12:18:11.301880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:04.443 [2024-12-16 12:18:11.304932] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:04.443 passed 00:06:04.443 Test: blockdev write read 8 blocks ...passed 00:06:04.443 Test: blockdev write read size > 128k ...passed 00:06:04.443 Test: blockdev write read invalid size ...passed 00:06:04.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.443 Test: blockdev write read max offset ...passed 00:06:04.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.443 Test: blockdev writev readv 8 blocks ...passed 00:06:04.443 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.443 Test: blockdev writev readv block ...passed 00:06:04.443 Test: blockdev writev readv size > 128k ...passed 00:06:04.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.443 Test: blockdev comparev and writev ...passed 00:06:04.443 Test: blockdev nvme passthru rw ...[2024-12-16 12:18:11.310870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8c38000 len:0x1000 00:06:04.443 [2024-12-16 12:18:11.310912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:04.443 passed 00:06:04.443 Test: blockdev nvme passthru vendor specific ...passed 00:06:04.443 Test: blockdev nvme admin passthru ...[2024-12-16 12:18:11.311388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:04.443 [2024-12-16 12:18:11.311412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:04.443 passed 00:06:04.443 Test: blockdev copy ...passed 00:06:04.443 Suite: bdevio tests on: Nvme1n1 00:06:04.443 Test: blockdev write read block ...passed 00:06:04.443 Test: blockdev write zeroes read block ...passed 00:06:04.443 Test: blockdev write zeroes read no split ...passed 00:06:04.443 Test: blockdev write zeroes read split ...passed 00:06:04.443 Test: blockdev write zeroes read split partial ...passed 00:06:04.443 Test: blockdev reset ...[2024-12-16 12:18:11.354501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:04.443 passed 00:06:04.443 Test: blockdev write read 8 blocks ...[2024-12-16 12:18:11.357262] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:04.443 passed 00:06:04.443 Test: blockdev write read size > 128k ...passed 00:06:04.443 Test: blockdev write read invalid size ...passed 00:06:04.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.443 Test: blockdev write read max offset ...passed 00:06:04.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.443 Test: blockdev writev readv 8 blocks ...passed 00:06:04.443 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.443 Test: blockdev writev readv block ...passed 00:06:04.443 Test: blockdev writev readv size > 128k ...passed 00:06:04.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.443 Test: blockdev comparev and writev ...passed 00:06:04.443 Test: blockdev nvme passthru rw ...[2024-12-16 12:18:11.364054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8c34000 len:0x1000 00:06:04.443 [2024-12-16 12:18:11.364094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:04.443 passed 00:06:04.443 Test: blockdev nvme passthru vendor specific ...passed 00:06:04.443 Test: blockdev nvme admin passthru ...[2024-12-16 12:18:11.364629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:04.443 [2024-12-16 12:18:11.364653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:04.443 passed 00:06:04.443 Test: blockdev copy ...passed 00:06:04.443 Suite: bdevio tests on: Nvme0n1 00:06:04.443 Test: blockdev write read block ...passed 00:06:04.443 Test: blockdev write zeroes read block ...passed 00:06:04.443 Test: blockdev write zeroes read no split ...passed 00:06:04.443 Test: blockdev write zeroes read split ...passed 00:06:04.443 Test: blockdev write zeroes read split partial ...passed 00:06:04.443 Test: blockdev reset ...[2024-12-16 12:18:11.422667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:04.443 passed 00:06:04.443 Test: blockdev write read 8 blocks ...[2024-12-16 12:18:11.425116] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:04.443 passed 00:06:04.443 Test: blockdev write read size > 128k ...passed 00:06:04.443 Test: blockdev write read invalid size ...passed 00:06:04.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.443 Test: blockdev write read max offset ...passed 00:06:04.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.443 Test: blockdev writev readv 8 blocks ...passed 00:06:04.443 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.443 Test: blockdev writev readv block ...passed 00:06:04.444 Test: blockdev writev readv size > 128k ...passed 00:06:04.444 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.444 Test: blockdev comparev and writev ...[2024-12-16 12:18:11.431017] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:04.444 separate metadata which is not supported yet. 
00:06:04.444 passed 00:06:04.444 Test: blockdev nvme passthru rw ...passed 00:06:04.444 Test: blockdev nvme passthru vendor specific ...passed 00:06:04.444 Test: blockdev nvme admin passthru ...[2024-12-16 12:18:11.431316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:04.444 [2024-12-16 12:18:11.431351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:04.444 passed 00:06:04.444 Test: blockdev copy ...passed 00:06:04.444 00:06:04.444 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.444 suites 6 6 n/a 0 0 00:06:04.444 tests 138 138 138 0 0 00:06:04.444 asserts 893 893 893 0 n/a 00:06:04.444 00:06:04.444 Elapsed time = 0.990 seconds 00:06:04.444 0 00:06:04.444 12:18:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61701 00:06:04.444 12:18:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61701 ']' 00:06:04.444 12:18:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61701 00:06:04.444 12:18:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:04.444 12:18:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.444 12:18:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61701 00:06:04.444 12:18:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.444 killing process with pid 61701 00:06:04.444 12:18:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.444 12:18:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61701' 00:06:04.444 12:18:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61701 00:06:04.444 12:18:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61701 00:06:05.010 12:18:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:05.010 00:06:05.010 real 0m2.015s 00:06:05.010 user 0m5.282s 00:06:05.010 sys 0m0.240s 00:06:05.010 ************************************ 00:06:05.011 END TEST bdev_bounds 00:06:05.011 ************************************ 00:06:05.011 12:18:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.011 12:18:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:05.011 12:18:12 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:05.011 12:18:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:05.011 12:18:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.011 12:18:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.011 ************************************ 00:06:05.011 START TEST bdev_nbd 00:06:05.011 ************************************ 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:05.011 12:18:12 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61755 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61755 /var/tmp/spdk-nbd.sock 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61755 ']' 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.011 12:18:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:05.270 [2024-12-16 12:18:12.123972] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:05.270 [2024-12-16 12:18:12.124056] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:05.270 [2024-12-16 12:18:12.274166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.270 [2024-12-16 12:18:12.351926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:06.223 12:18:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:06.223 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:06.223 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:06.223 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:06.223 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:06.223 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:06.223 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.223 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.223 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:06.223 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.223 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.224 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.224 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.224 1+0 records in 
00:06:06.224 1+0 records out 00:06:06.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431708 s, 9.5 MB/s 00:06:06.224 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.224 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.224 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.224 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.224 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.224 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:06.224 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:06.224 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.482 1+0 records in 00:06:06.482 1+0 records out 00:06:06.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273902 s, 15.0 MB/s 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.482 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:06.483 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:06.483 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.741 1+0 records in 00:06:06.741 1+0 records out 00:06:06.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513195 s, 8.0 MB/s 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:06.741 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.000 1+0 records in 00:06:07.000 1+0 records out 00:06:07.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252734 s, 16.2 MB/s 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.000 12:18:13 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:07.000 12:18:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.258 1+0 records in 00:06:07.258 1+0 records out 00:06:07.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364641 s, 11.2 MB/s 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:07.258 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:07.259 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:07.259 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:07.259 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:07.259 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:07.259 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:07.259 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.259 12:18:14 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.259 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.259 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.517 1+0 records in 00:06:07.517 1+0 records out 00:06:07.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346378 s, 11.8 MB/s 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd0", 00:06:07.517 "bdev_name": "Nvme0n1" 00:06:07.517 }, 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd1", 00:06:07.517 "bdev_name": "Nvme1n1" 00:06:07.517 }, 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd2", 00:06:07.517 "bdev_name": "Nvme2n1" 00:06:07.517 }, 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd3", 00:06:07.517 "bdev_name": "Nvme2n2" 00:06:07.517 }, 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd4", 00:06:07.517 "bdev_name": "Nvme2n3" 00:06:07.517 }, 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd5", 00:06:07.517 "bdev_name": "Nvme3n1" 00:06:07.517 } 00:06:07.517 ]' 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd0", 00:06:07.517 "bdev_name": "Nvme0n1" 00:06:07.517 }, 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd1", 00:06:07.517 "bdev_name": "Nvme1n1" 00:06:07.517 }, 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd2", 00:06:07.517 "bdev_name": "Nvme2n1" 00:06:07.517 }, 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd3", 00:06:07.517 "bdev_name": "Nvme2n2" 00:06:07.517 }, 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd4", 00:06:07.517 "bdev_name": "Nvme2n3" 00:06:07.517 }, 00:06:07.517 { 00:06:07.517 "nbd_device": "/dev/nbd5", 00:06:07.517 "bdev_name": "Nvme3n1" 00:06:07.517 } 00:06:07.517 ]' 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.517 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.775 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.775 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.775 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.775 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.775 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.775 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.775 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.775 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.775 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.775 12:18:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.032 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.032 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.032 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.032 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.032 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.032 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.032 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:08.032 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.032 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.032 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:08.289 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:08.289 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:08.289 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:08.289 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.289 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.289 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:08.289 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:08.289 12:18:15 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:08.289 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.289 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.548 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.808 12:18:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.069 12:18:16 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:09.069 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:09.331 /dev/nbd0 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.331 
12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.331 1+0 records in 00:06:09.331 1+0 records out 00:06:09.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105912 s, 3.9 MB/s 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:09.331 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:09.592 /dev/nbd1 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.592 1+0 records in 00:06:09.592 1+0 records out 00:06:09.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000991611 s, 4.1 MB/s 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:09.592 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:09.852 /dev/nbd10 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.852 1+0 records in 00:06:09.852 1+0 records out 00:06:09.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000894527 s, 4.6 MB/s 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:09.852 12:18:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:10.113 /dev/nbd11 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.113 1+0 records in 00:06:10.113 1+0 records out 00:06:10.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00139199 s, 2.9 MB/s 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:10.113 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:10.373 /dev/nbd12 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.373 1+0 records in 00:06:10.373 1+0 records out 00:06:10.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00193588 s, 2.1 MB/s 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:10.373 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:10.634 /dev/nbd13 00:06:10.634 12:18:17 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.634 1+0 records in 00:06:10.634 1+0 records out 00:06:10.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000928643 s, 4.4 MB/s 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.634 { 00:06:10.634 "nbd_device": "/dev/nbd0", 00:06:10.634 "bdev_name": "Nvme0n1" 00:06:10.634 }, 00:06:10.634 { 00:06:10.634 "nbd_device": "/dev/nbd1", 00:06:10.634 "bdev_name": "Nvme1n1" 00:06:10.634 }, 00:06:10.634 { 00:06:10.634 "nbd_device": "/dev/nbd10", 00:06:10.634 "bdev_name": "Nvme2n1" 00:06:10.634 }, 00:06:10.634 { 00:06:10.634 "nbd_device": "/dev/nbd11", 00:06:10.634 "bdev_name": "Nvme2n2" 00:06:10.634 }, 00:06:10.634 { 00:06:10.634 "nbd_device": "/dev/nbd12", 00:06:10.634 "bdev_name": "Nvme2n3" 00:06:10.634 }, 00:06:10.634 { 00:06:10.634 "nbd_device": "/dev/nbd13", 00:06:10.634 "bdev_name": "Nvme3n1" 00:06:10.634 } 00:06:10.634 ]' 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.634 { 00:06:10.634 "nbd_device": "/dev/nbd0", 00:06:10.634 "bdev_name": "Nvme0n1" 00:06:10.634 }, 00:06:10.634 { 00:06:10.634 "nbd_device": "/dev/nbd1", 00:06:10.634 "bdev_name": "Nvme1n1" 00:06:10.634 }, 00:06:10.634 { 00:06:10.634 "nbd_device": "/dev/nbd10", 00:06:10.634 "bdev_name": "Nvme2n1" 00:06:10.634 }, 00:06:10.634 
{ 00:06:10.634 "nbd_device": "/dev/nbd11", 00:06:10.634 "bdev_name": "Nvme2n2" 00:06:10.634 }, 00:06:10.634 { 00:06:10.634 "nbd_device": "/dev/nbd12", 00:06:10.634 "bdev_name": "Nvme2n3" 00:06:10.634 }, 00:06:10.634 { 00:06:10.634 "nbd_device": "/dev/nbd13", 00:06:10.634 "bdev_name": "Nvme3n1" 00:06:10.634 } 00:06:10.634 ]' 00:06:10.634 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.916 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.916 /dev/nbd1 00:06:10.916 /dev/nbd10 00:06:10.916 /dev/nbd11 00:06:10.916 /dev/nbd12 00:06:10.916 /dev/nbd13' 00:06:10.916 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.916 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.916 /dev/nbd1 00:06:10.916 /dev/nbd10 00:06:10.916 /dev/nbd11 00:06:10.916 /dev/nbd12 00:06:10.916 /dev/nbd13' 00:06:10.916 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:10.917 256+0 records in 00:06:10.917 256+0 records out 00:06:10.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107196 s, 97.8 MB/s 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.917 12:18:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.208 256+0 records in 00:06:11.208 256+0 records out 00:06:11.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224219 s, 4.7 MB/s 00:06:11.208 12:18:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.208 12:18:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.208 256+0 records in 00:06:11.208 256+0 records out 00:06:11.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225455 s, 4.7 MB/s 00:06:11.208 12:18:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.208 12:18:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:11.469 256+0 records in 00:06:11.469 256+0 records out 00:06:11.469 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.230455 s, 4.6 MB/s 00:06:11.469 12:18:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.469 12:18:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:11.731 256+0 records in 00:06:11.731 256+0 records out 00:06:11.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15027 s, 7.0 MB/s 00:06:11.731 12:18:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.731 12:18:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:11.992 256+0 records in 00:06:11.992 256+0 records out 00:06:11.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227873 s, 4.6 MB/s 00:06:11.992 12:18:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.992 12:18:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:11.992 256+0 records in 00:06:11.992 256+0 records out 00:06:11.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229382 s, 4.6 MB/s 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:12.252 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.253 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.512 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.512 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.512 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.512 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.512 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.512 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.512 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.512 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.512 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.512 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:12.770 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:12.770 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:12.770 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:12.770 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.770 12:18:19 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.770 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:12.770 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.770 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.770 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.770 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:13.028 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:13.028 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:13.028 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:13.028 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.028 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.028 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:13.028 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.028 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.028 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.028 12:18:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:13.286 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.544 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.544 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:13.544 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.544 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:13.545 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:13.803 malloc_lvol_verify 00:06:13.803 12:18:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:14.060 469b681a-7425-45b0-aeee-fb2c90b9e4f9 00:06:14.060 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:14.321 b8cd6a40-812f-4a53-b16b-122672ad065f 00:06:14.321 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:14.580 /dev/nbd0 00:06:14.580 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:14.580 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:14.580 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:14.580 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:14.580 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:14.580 mke2fs 1.47.0 (5-Feb-2023) 00:06:14.580 Discarding device blocks: 0/4096 done 00:06:14.580 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:14.580 00:06:14.580 Allocating group tables: 0/1 done 00:06:14.580 Writing inode tables: 0/1 done 00:06:14.580 Creating journal (1024 blocks): done 00:06:14.580 Writing superblocks and filesystem accounting information: 0/1 done 00:06:14.580 00:06:14.580 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:14.580 12:18:21 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.580 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:14.580 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.580 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:14.580 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.580 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61755 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61755 ']' 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61755 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.581 12:18:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61755 00:06:14.839 12:18:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.839 12:18:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.839 killing process with pid 61755 00:06:14.839 12:18:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61755' 00:06:14.839 12:18:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61755 00:06:14.839 12:18:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61755 00:06:15.408 12:18:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:15.408 00:06:15.408 real 0m10.238s 00:06:15.408 user 0m14.143s 00:06:15.408 sys 0m3.255s 00:06:15.408 ************************************ 00:06:15.408 END TEST bdev_nbd 00:06:15.408 ************************************ 00:06:15.408 12:18:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.408 12:18:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:15.408 12:18:22 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:15.408 12:18:22 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:15.408 skipping fio tests on NVMe due to multi-ns failures. 00:06:15.408 12:18:22 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
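
The bdev_nbd phase that just ended leans on one helper pair throughout: after each nbd_start_disk RPC the harness polls /proc/partitions until the nbd device appears, then proves it is actually serving I/O with a single 4 KiB O_DIRECT read (the waitfornbd traces above); after each nbd_stop_disk it polls until the entry disappears (waitfornbd_exit). The device lists come from the nbd_get_disks JSON shown above, flattened into a bash array with jq -r '.[] | .nbd_device'; data integrity is checked by pushing 1 MiB of /dev/urandom through each device and reading it back with cmp -b -n 1M. Below is a minimal standalone sketch of the polling pair — the 20-try bound and the dd/stat read check match the trace, while the 0.1 s sleep and the exact function bodies are reconstructions, not autotest_common.sh verbatim:

waitfornbd() {                         # body reconstructed from the trace above
    local nbd_name=$1 tmp i
    tmp=$(mktemp)
    for ((i = 1; i <= 20; i++)); do    # appearance loop (lines @875-877 in the trace)
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                      # assumed interval; not visible in the log
    done
    for ((i = 1; i <= 20; i++)); do    # readability loop (lines @888-893 in the trace)
        if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null &&
           [[ $(stat -c %s "$tmp") -ne 0 ]]; then
            rm -f "$tmp"
            return 0                   # device present and serving reads
        fi
        sleep 0.1
    done
    rm -f "$tmp"
    return 1
}

waitfornbd_exit() {                    # body reconstructed from the trace above
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0   # kernel dropped the entry
        sleep 0.1
    done
    return 1                           # device never went away
}

The test above runs this pair once per name — nbd0, nbd1, nbd10 through nbd13, mapped to Nvme0n1, Nvme1n1 and the Nvme2/Nvme3 namespaces as listed in the nbd_get_disks output.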
00:06:15.408 12:18:22 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:15.408 12:18:22 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:15.408 12:18:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:15.408 12:18:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.408 12:18:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:15.408 ************************************ 00:06:15.408 START TEST bdev_verify 00:06:15.408 ************************************ 00:06:15.408 12:18:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:15.408 [2024-12-16 12:18:22.419519] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:15.408 [2024-12-16 12:18:22.419602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62135 ] 00:06:15.667 [2024-12-16 12:18:22.563411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.667 [2024-12-16 12:18:22.639447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.667 [2024-12-16 12:18:22.639530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.233 Running I/O for 5 seconds... 00:06:18.547 25984.00 IOPS, 101.50 MiB/s [2024-12-16T12:18:26.637Z] 21888.00 IOPS, 85.50 MiB/s [2024-12-16T12:18:27.579Z] 20565.33 IOPS, 80.33 MiB/s [2024-12-16T12:18:28.521Z] 20544.00 IOPS, 80.25 MiB/s [2024-12-16T12:18:28.521Z] 20172.80 IOPS, 78.80 MiB/s 00:06:21.415 Latency(us) 00:06:21.415 [2024-12-16T12:18:28.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:21.415 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:21.415 Verification LBA range: start 0x0 length 0xbd0bd 00:06:21.415 Nvme0n1 : 5.07 1743.64 6.81 0.00 0.00 73259.24 10435.35 69770.63 00:06:21.415 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:21.415 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:21.415 Nvme0n1 : 5.06 1594.58 6.23 0.00 0.00 80050.67 12048.54 77836.60 00:06:21.415 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:21.415 Verification LBA range: start 0x0 length 0xa0000 00:06:21.415 Nvme1n1 : 5.07 1743.18 6.81 0.00 0.00 73191.91 13107.20 66544.25 00:06:21.415 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:21.415 Verification LBA range: start 0xa0000 length 0xa0000 00:06:21.415 Nvme1n1 : 5.06 1593.64 6.23 0.00 0.00 79903.83 15325.34 71787.13 00:06:21.415 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:21.415 Verification LBA range: start 0x0 length 0x80000 00:06:21.415 Nvme2n1 : 5.07 1742.70 6.81 0.00 0.00 73137.38 13006.38 67350.84 00:06:21.415 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:21.415 Verification LBA range: start 0x80000 length 0x80000 00:06:21.415 Nvme2n1 : 5.06 1593.23 6.22 0.00 0.00 79779.68 15426.17 70173.93 00:06:21.415 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:21.415 Verification LBA range: start 0x0 length 0x80000 00:06:21.415 Nvme2n2 : 5.07 1741.65 6.80 0.00 0.00 73064.03 13510.50 65334.35 00:06:21.415 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:21.415 Verification LBA range: start 0x80000 length 0x80000 00:06:21.415 Nvme2n2 : 5.06 1592.74 6.22 0.00 0.00 79650.42 13712.15 73400.32 00:06:21.415 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:21.416 Verification LBA range: start 0x0 length 0x80000 00:06:21.416 Nvme2n3 : 5.07 1741.20 6.80 0.00 0.00 72966.76 12401.43 68157.44 00:06:21.416 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:21.416 Verification LBA range: start 0x80000 length 0x80000 00:06:21.416 Nvme2n3 : 5.07 1602.16 6.26 0.00 0.00 79100.07 4814.38 75416.81 00:06:21.416 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:21.416 Verification LBA range: start 0x0 length 0x20000 00:06:21.416 Nvme3n1 : 5.07 1740.70 6.80 0.00 0.00 72873.80 5747.00 67754.14 00:06:21.416 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:21.416 Verification LBA range: start 0x20000 length 0x20000 00:06:21.416 Nvme3n1 : 5.08 1601.39 6.26 0.00 0.00 79064.09 6175.51 77836.60 00:06:21.416 [2024-12-16T12:18:28.522Z] =================================================================================================================== 00:06:21.416 [2024-12-16T12:18:28.522Z] Total : 20030.83 78.25 0.00 0.00 76192.53 4814.38 77836.60 00:06:22.356 00:06:22.356 real 0m7.043s 00:06:22.356 user 0m13.233s 00:06:22.356 sys 0m0.193s 00:06:22.356 12:18:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.356 ************************************ 00:06:22.356 END TEST bdev_verify 00:06:22.356 ************************************ 00:06:22.356 12:18:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:22.356 12:18:29 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:22.356 12:18:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:22.616 12:18:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.616 12:18:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.616 ************************************ 00:06:22.616 START TEST bdev_verify_big_io 00:06:22.616 ************************************ 00:06:22.616 12:18:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:22.616 [2024-12-16 12:18:29.536191] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
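
The verify pass summarized above and the big-I/O pass starting here drive the same tool, bdevperf, differing only in I/O size (-o 4096 vs -o 65536) and runtime. The MiB/s column in the tables is simply IOPS times I/O size: for the first Nvme0n1 row, 1743.64 * 4096 / 1048576 ≈ 6.81 MiB/s. A sketch of the invocation as traced — paths and flag values are copied from the log, while the per-flag glosses are a best-effort reading rather than bdevperf's documented help text, and -C is carried over uninterpreted:

SPDK=/home/vagrant/spdk_repo/spdk      # repo path as used throughout the trace
args=(
    --json "$SPDK/test/bdev/bdev.json" # bdev config generated earlier in the run
    -q 128                             # queue depth per job
    -o 4096                            # I/O size in bytes (65536 for the big-I/O pass)
    -w verify                          # data-verifying workload
    -t 5                               # run time in seconds
    -C                                 # flag carried over verbatim from the harness
    -m 0x3                             # core mask 0x3: run on cores 0 and 1
)
"$SPDK/build/examples/bdevperf" "${args[@]}"

With -m 0x3 the tables above show two jobs per bdev, one on core 0 (Core Mask 0x1) and one on core 1 (Core Mask 0x2).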
00:06:22.616 [2024-12-16 12:18:29.536307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62233 ] 00:06:22.616 [2024-12-16 12:18:29.695671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.875 [2024-12-16 12:18:29.798538] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.875 [2024-12-16 12:18:29.798632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.444 Running I/O for 5 seconds... 00:06:28.751 851.00 IOPS, 53.19 MiB/s [2024-12-16T12:18:36.795Z] 2080.50 IOPS, 130.03 MiB/s [2024-12-16T12:18:36.795Z] 2702.33 IOPS, 168.90 MiB/s 00:06:29.689 Latency(us) 00:06:29.689 [2024-12-16T12:18:36.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:29.689 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0x0 length 0xbd0b 00:06:29.689 Nvme0n1 : 5.57 115.00 7.19 0.00 0.00 1067073.14 16736.89 1206669.00 00:06:29.689 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:29.689 Nvme0n1 : 5.68 112.69 7.04 0.00 0.00 1096276.91 26617.70 1193763.45 00:06:29.689 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0x0 length 0xa000 00:06:29.689 Nvme1n1 : 5.72 115.83 7.24 0.00 0.00 1012704.06 101631.21 993727.41 00:06:29.689 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0xa000 length 0xa000 00:06:29.689 Nvme1n1 : 5.68 112.65 7.04 0.00 0.00 1055958.57 86305.87 1006632.96 00:06:29.689 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0x0 length 0x8000 00:06:29.689 Nvme2n1 : 5.97 116.18 7.26 0.00 0.00 981742.52 73803.62 1806777.11 00:06:29.689 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0x8000 length 0x8000 00:06:29.689 Nvme2n1 : 5.86 113.64 7.10 0.00 0.00 1001575.98 174224.94 909841.33 00:06:29.689 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0x0 length 0x8000 00:06:29.689 Nvme2n2 : 5.97 120.46 7.53 0.00 0.00 919969.79 67350.84 1845493.76 00:06:29.689 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0x8000 length 0x8000 00:06:29.689 Nvme2n2 : 6.00 116.41 7.28 0.00 0.00 961448.51 42951.29 1948738.17 00:06:29.689 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0x0 length 0x8000 00:06:29.689 Nvme2n3 : 6.06 129.69 8.11 0.00 0.00 822521.57 31255.63 1871304.86 00:06:29.689 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0x8000 length 0x8000 00:06:29.689 Nvme2n3 : 6.00 128.04 8.00 0.00 0.00 846128.71 36498.51 1109877.37 00:06:29.689 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0x0 length 0x2000 00:06:29.689 Nvme3n1 : 6.08 150.62 9.41 0.00 0.00 687971.13 475.77 1897115.96 00:06:29.689 Job: Nvme3n1 (Core Mask 0x2, workload: verify, 
depth: 128, IO size: 65536) 00:06:29.689 Verification LBA range: start 0x2000 length 0x2000 00:06:29.689 Nvme3n1 : 6.06 143.94 9.00 0.00 0.00 728781.89 721.53 1135688.47 00:06:29.689 [2024-12-16T12:18:36.795Z] =================================================================================================================== 00:06:29.689 [2024-12-16T12:18:36.795Z] Total : 1475.17 92.20 0.00 0.00 916404.86 475.77 1948738.17 00:06:31.064 00:06:31.064 real 0m8.560s 00:06:31.064 user 0m16.191s 00:06:31.064 sys 0m0.226s 00:06:31.064 12:18:38 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.064 ************************************ 00:06:31.064 END TEST bdev_verify_big_io 00:06:31.064 ************************************ 00:06:31.064 12:18:38 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:31.064 12:18:38 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:31.064 12:18:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:31.064 12:18:38 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.064 12:18:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.064 ************************************ 00:06:31.064 START TEST bdev_write_zeroes 00:06:31.064 ************************************ 00:06:31.064 12:18:38 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:31.064 [2024-12-16 12:18:38.153710] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:31.064 [2024-12-16 12:18:38.153824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62342 ] 00:06:31.325 [2024-12-16 12:18:38.314256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.325 [2024-12-16 12:18:38.411291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.894 Running I/O for 1 seconds... 
00:06:33.275 69504.00 IOPS, 271.50 MiB/s 00:06:33.275 Latency(us) 00:06:33.275 [2024-12-16T12:18:40.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.275 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:33.275 Nvme0n1 : 1.02 11540.75 45.08 0.00 0.00 11069.04 4965.61 23592.96 00:06:33.275 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:33.275 Nvme1n1 : 1.02 11527.75 45.03 0.00 0.00 11068.47 8570.09 21173.17 00:06:33.275 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:33.275 Nvme2n1 : 1.02 11513.99 44.98 0.00 0.00 11040.84 8418.86 20064.10 00:06:33.275 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:33.275 Nvme2n2 : 1.02 11501.03 44.93 0.00 0.00 11029.88 8015.56 19459.15 00:06:33.275 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:33.275 Nvme2n3 : 1.03 11488.14 44.88 0.00 0.00 11024.10 8116.38 20064.10 00:06:33.275 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:33.275 Nvme3n1 : 1.03 11475.26 44.83 0.00 0.00 11018.57 8116.38 21677.29 00:06:33.275 [2024-12-16T12:18:40.381Z] =================================================================================================================== 00:06:33.275 [2024-12-16T12:18:40.381Z] Total : 69046.92 269.71 0.00 0.00 11041.82 4965.61 23592.96 00:06:33.847 00:06:33.847 real 0m2.670s 00:06:33.847 user 0m2.365s 00:06:33.847 sys 0m0.191s 00:06:33.847 12:18:40 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.847 ************************************ 00:06:33.847 END TEST bdev_write_zeroes 00:06:33.847 ************************************ 00:06:33.847 12:18:40 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:33.847 12:18:40 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.847 12:18:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:33.847 12:18:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.847 12:18:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.847 ************************************ 00:06:33.847 START TEST bdev_json_nonenclosed 00:06:33.847 ************************************ 00:06:33.847 12:18:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.847 [2024-12-16 12:18:40.884530] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
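
The two short runs here and just below are negative tests: bdevperf is fed deliberately malformed JSON configs and must fail cleanly through spdk_app_stop with a non-zero code instead of crashing. The fixture contents are not shown in the log, so the following is a hypothetical reconstruction inferred from the two json_config_prepare_ctx error messages:

# nonenclosed.json -- hypothetical: top level is not an object,
# tripping "Invalid JSON configuration: not enclosed in {}."
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF

# nonarray.json -- hypothetical: "subsystems" is an object,
# tripping "Invalid JSON configuration: 'subsystems' should be an array."
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF

# For contrast, the minimal shape the validator accepts: an enclosing
# object whose "subsystems" key holds an array.
cat > valid.json <<'EOF'
{ "subsystems": [] }
EOF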
00:06:33.847 [2024-12-16 12:18:40.884643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62397 ] 00:06:34.107 [2024-12-16 12:18:41.045559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.107 [2024-12-16 12:18:41.139773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.107 [2024-12-16 12:18:41.139847] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:34.107 [2024-12-16 12:18:41.139874] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:34.107 [2024-12-16 12:18:41.139884] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.369 00:06:34.369 real 0m0.493s 00:06:34.369 user 0m0.296s 00:06:34.369 sys 0m0.092s 00:06:34.369 12:18:41 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.369 ************************************ 00:06:34.369 END TEST bdev_json_nonenclosed 00:06:34.369 ************************************ 00:06:34.369 12:18:41 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 12:18:41 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:34.369 12:18:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:34.369 12:18:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.369 12:18:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 ************************************ 00:06:34.369 START TEST bdev_json_nonarray 00:06:34.369 ************************************ 00:06:34.369 12:18:41 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:34.369 [2024-12-16 12:18:41.435889] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:34.369 [2024-12-16 12:18:41.435998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62417 ] 00:06:34.630 [2024-12-16 12:18:41.594044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.630 [2024-12-16 12:18:41.688719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.630 [2024-12-16 12:18:41.688802] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
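[Note] Both JSON negative tests (bdev_json_nonenclosed above, bdev_json_nonarray below) feed bdevperf a config that fails json_config_prepare_ctx validation. The test files themselves are not shown in this log, so the shapes below are assumptions reconstructed from the two error strings; a valid config is a JSON object whose "subsystems" member is an array:

  cat > valid.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
  EOF
  # nonenclosed.json-style input: top level not enclosed in {}
  #   e.g.  "subsystems": []          -> "not enclosed in {}"
  # nonarray.json-style input: "subsystems" is not an array
  #   e.g.  { "subsystems": {} }      -> "'subsystems' should be an array"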
00:06:34.630 [2024-12-16 12:18:41.688818] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:34.630 [2024-12-16 12:18:41.688828] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.891 00:06:34.891 real 0m0.490s 00:06:34.891 user 0m0.293s 00:06:34.891 sys 0m0.093s 00:06:34.891 ************************************ 00:06:34.891 END TEST bdev_json_nonarray 00:06:34.891 ************************************ 00:06:34.891 12:18:41 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.891 12:18:41 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:34.891 12:18:41 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:34.891 12:18:41 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:34.891 12:18:41 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:34.891 12:18:41 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:34.891 12:18:41 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:34.891 12:18:41 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:34.891 12:18:41 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:34.891 12:18:41 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:34.891 12:18:41 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:34.891 12:18:41 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:34.891 12:18:41 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:34.891 00:06:34.891 real 0m36.605s 00:06:34.891 user 0m56.248s 00:06:34.891 sys 0m5.179s 00:06:34.891 12:18:41 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.891 12:18:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.891 ************************************ 00:06:34.891 END TEST blockdev_nvme 00:06:34.891 ************************************ 00:06:34.891 12:18:41 -- spdk/autotest.sh@209 -- # uname -s 00:06:34.891 12:18:41 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:34.892 12:18:41 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:34.892 12:18:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:34.892 12:18:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.892 12:18:41 -- common/autotest_common.sh@10 -- # set +x 00:06:34.892 ************************************ 00:06:34.892 START TEST blockdev_nvme_gpt 00:06:34.892 ************************************ 00:06:34.892 12:18:41 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:35.153 * Looking for test storage... 
00:06:35.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:35.153 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:35.153 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:35.153 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:35.153 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.153 12:18:42 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:35.153 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.154 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:35.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.154 --rc genhtml_branch_coverage=1 00:06:35.154 --rc genhtml_function_coverage=1 00:06:35.154 --rc genhtml_legend=1 00:06:35.154 --rc geninfo_all_blocks=1 00:06:35.154 --rc geninfo_unexecuted_blocks=1 00:06:35.154 00:06:35.154 ' 00:06:35.154 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:35.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.154 --rc 
genhtml_branch_coverage=1 00:06:35.154 --rc genhtml_function_coverage=1 00:06:35.154 --rc genhtml_legend=1 00:06:35.154 --rc geninfo_all_blocks=1 00:06:35.154 --rc geninfo_unexecuted_blocks=1 00:06:35.154 00:06:35.154 ' 00:06:35.154 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:35.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.154 --rc genhtml_branch_coverage=1 00:06:35.154 --rc genhtml_function_coverage=1 00:06:35.154 --rc genhtml_legend=1 00:06:35.154 --rc geninfo_all_blocks=1 00:06:35.154 --rc geninfo_unexecuted_blocks=1 00:06:35.154 00:06:35.154 ' 00:06:35.154 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:35.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.154 --rc genhtml_branch_coverage=1 00:06:35.154 --rc genhtml_function_coverage=1 00:06:35.154 --rc genhtml_legend=1 00:06:35.154 --rc geninfo_all_blocks=1 00:06:35.154 --rc geninfo_unexecuted_blocks=1 00:06:35.154 00:06:35.154 ' 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62501 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62501 
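[Note] The scripts/common.sh trace a few entries up (lt 1.15 2 via cmp_versions) is the usual field-wise version compare: split each version on '.', '-' or ':', then compare field by field, padding the shorter one with zeros. A minimal re-sketch under the assumption that all fields are numeric (the real helper also routes fields through its decimal() check):

  lt() {
    local -a v1 v2
    local i n
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
    done
    return 1                                        # equal: not less-than
  }
  lt 1.15 2 && echo 'lcov 1.15 is older than 2'     # first fields differ: 1 < 2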
00:06:35.154 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62501 ']' 00:06:35.154 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.154 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.154 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.154 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.154 12:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.154 12:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:35.154 [2024-12-16 12:18:42.204832] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:35.154 [2024-12-16 12:18:42.204953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62501 ] 00:06:35.415 [2024-12-16 12:18:42.362501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.415 [2024-12-16 12:18:42.460514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.987 12:18:43 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.987 12:18:43 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:35.987 12:18:43 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:35.988 12:18:43 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:35.988 12:18:43 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:36.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:36.522 Waiting for block devices as requested 00:06:36.522 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.522 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.783 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.783 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.061 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:42.061 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:42.061 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:42.061 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:42.061 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:42.061 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:42.061 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:42.061 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:42.061 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:42.061 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:42.061 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:42.061 12:18:48 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:42.061 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:42.061 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:42.062 12:18:48 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:42.062 BYT; 00:06:42.062 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:42.062 BYT; 00:06:42.062 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:42.062 12:18:48 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:42.062 12:18:48 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:42.995 The operation has completed successfully. 00:06:42.995 12:18:49 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:43.928 The operation has completed successfully. 00:06:43.928 12:18:50 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:44.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:44.753 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.753 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.753 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.753 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:45.011 12:18:51 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:45.011 12:18:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.011 12:18:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.011 [] 00:06:45.011 12:18:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.011 12:18:51 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:45.011 12:18:51 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:45.011 12:18:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:45.011 12:18:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:45.011 12:18:51 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:45.012 12:18:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.012 12:18:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.298 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:45.298 12:18:52 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.298 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:45.298 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.298 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.298 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.298 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:45.298 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:45.298 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.298 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.298 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:45.298 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:45.299 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6a25d148-bcba-4000-b5cd-31e18983577b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6a25d148-bcba-4000-b5cd-31e18983577b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a0160ade-d0c1-4105-af65-7cbed2482eb9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a0160ade-d0c1-4105-af65-7cbed2482eb9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6c3906c4-c81c-49fd-895a-6bdc527ea6e3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6c3906c4-c81c-49fd-895a-6bdc527ea6e3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "36f95d4e-c13d-4ce1-b839-6c7d12b8e866"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "36f95d4e-c13d-4ce1-b839-6c7d12b8e866",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "043cd037-c68c-403b-937f-03f9882032dc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "043cd037-c68c-403b-937f-03f9882032dc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:45.299 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:45.299 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:45.299 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:45.299 12:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62501 00:06:45.299 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62501 ']' 00:06:45.299 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62501 00:06:45.299 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:45.299 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.299 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62501 00:06:45.557 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.557 killing process with pid 62501 00:06:45.557 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.557 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62501' 00:06:45.557 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62501 00:06:45.557 12:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62501 00:06:46.491 12:18:53 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:46.491 12:18:53 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:46.491 12:18:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:46.491 12:18:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.491 12:18:53 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:46.491 ************************************ 00:06:46.491 START TEST bdev_hello_world 00:06:46.491 ************************************ 00:06:46.491 12:18:53 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:46.749 [2024-12-16 12:18:53.631685] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:46.749 [2024-12-16 12:18:53.631803] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63118 ] 00:06:46.749 [2024-12-16 12:18:53.786137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.008 [2024-12-16 12:18:53.862850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.266 [2024-12-16 12:18:54.353499] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:47.266 [2024-12-16 12:18:54.353540] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:47.266 [2024-12-16 12:18:54.353556] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:47.266 [2024-12-16 12:18:54.355452] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:47.266 [2024-12-16 12:18:54.355958] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:47.266 [2024-12-16 12:18:54.355983] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:47.266 [2024-12-16 12:18:54.356169] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
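[Note] bdev_hello_world above is one round trip through the hello_bdev example app: open the bdev named by -b, write a buffer, read it back ("Read string from bdev : Hello World!"), then stop the app. The invocation, condensed from the run_test line:

  ./build/examples/hello_bdev \
      --json test/bdev/bdev.json \   # attach the same NVMe controllers
      -b Nvme0n1                     # bdev to open for the write/read round trip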
00:06:47.266 00:06:47.266 [2024-12-16 12:18:54.356188] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:47.833 00:06:47.833 real 0m1.335s 00:06:47.833 user 0m1.075s 00:06:47.833 sys 0m0.155s 00:06:47.833 12:18:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.833 12:18:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:47.833 ************************************ 00:06:47.833 END TEST bdev_hello_world 00:06:47.833 ************************************ 00:06:48.092 12:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:48.092 12:18:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.092 12:18:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.092 12:18:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.092 ************************************ 00:06:48.092 START TEST bdev_bounds 00:06:48.092 ************************************ 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63155 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.092 Process bdevio pid: 63155 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63155' 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63155 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63155 ']' 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.092 12:18:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:48.093 [2024-12-16 12:18:55.001223] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
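[Note] bdev_bounds below starts the bdevio app in wait mode and then kicks it over RPC (the tests.py perform_tests call further down). Condensed from the run_test line; the flag notes are assumptions from the app's conventions, not spelled out in this log:

  ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json
  #   -w   : start up, then wait for the 'perform_tests' RPC before running I/O
  #   -s 0 : no pre-reserved memory (PRE_RESERVED_MEM=0 earlier in the log)
  #   -c 0x7 in the EAL args gives three reactor cores, matching the three
  #          "Reactor started on core" notices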
00:06:48.093 [2024-12-16 12:18:55.001317] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63155 ] 00:06:48.093 [2024-12-16 12:18:55.152966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.351 [2024-12-16 12:18:55.250228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.351 [2024-12-16 12:18:55.250294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.351 [2024-12-16 12:18:55.250462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.918 12:18:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.918 12:18:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:48.918 12:18:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:48.918 I/O targets: 00:06:48.918 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:48.918 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:48.918 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:48.918 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:48.918 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:48.918 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:48.918 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:48.918 00:06:48.918 00:06:48.918 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.918 http://cunit.sourceforge.net/ 00:06:48.918 00:06:48.918 00:06:48.918 Suite: bdevio tests on: Nvme3n1 00:06:48.918 Test: blockdev write read block ...passed 00:06:48.918 Test: blockdev write zeroes read block ...passed 00:06:48.918 Test: blockdev write zeroes read no split ...passed 00:06:48.918 Test: blockdev write zeroes read split ...passed 00:06:48.918 Test: blockdev write zeroes read split partial ...passed 00:06:48.918 Test: blockdev reset ...[2024-12-16 12:18:55.949038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:48.918 [2024-12-16 12:18:55.951914] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
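[Note] The MiB figures in the I/O targets list above are simply blocks times the 4096-byte block size; two spot checks:

  echo $(( 1048576 * 4096 / 1024 / 1024 ))   # Nvme2n1: 4096 MiB
  echo $((  655104 * 4096 / 1024 / 1024 ))   # Nvme1n1p1: 2559 MiB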
00:06:48.918 passed 00:06:48.918 Test: blockdev write read 8 blocks ...passed 00:06:48.918 Test: blockdev write read size > 128k ...passed 00:06:48.918 Test: blockdev write read invalid size ...passed 00:06:48.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.918 Test: blockdev write read max offset ...passed 00:06:48.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.918 Test: blockdev writev readv 8 blocks ...passed 00:06:48.918 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.918 Test: blockdev writev readv block ...passed 00:06:48.918 Test: blockdev writev readv size > 128k ...passed 00:06:48.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.918 Test: blockdev comparev and writev ...[2024-12-16 12:18:55.958774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba604000 len:0x1000 00:06:48.918 [2024-12-16 12:18:55.958881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.918 passed 00:06:48.918 Test: blockdev nvme passthru rw ...passed 00:06:48.918 Test: blockdev nvme passthru vendor specific ...[2024-12-16 12:18:55.959479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:48.918 [2024-12-16 12:18:55.959552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.918 passed 00:06:48.918 Test: blockdev nvme admin passthru ...passed 00:06:48.918 Test: blockdev copy ...passed 00:06:48.918 Suite: bdevio tests on: Nvme2n3 00:06:48.918 Test: blockdev write read block ...passed 00:06:48.918 Test: blockdev write zeroes read block ...passed 00:06:48.918 Test: blockdev write zeroes read no split ...passed 00:06:48.918 Test: blockdev write zeroes read split ...passed 00:06:48.918 Test: blockdev write zeroes read split partial ...passed 00:06:48.918 Test: blockdev reset ...[2024-12-16 12:18:56.019099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:49.177 [2024-12-16 12:18:56.022021] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
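[Note] The error-looking notices inside these otherwise passing suites are expected completions. SPDK prints NVMe completions as STATUS (SCT/SC); decoded against the NVMe spec (and since the tests pass while logging them, the miscompare and the unknown opcode look like deliberate negative probes):

  # (02/85) -> SCT 0x2 Media and Data Integrity Errors, SC 0x85 Compare Failure
  # (00/01) -> SCT 0x0 Generic Command Status,          SC 0x01 Invalid Opcode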
00:06:49.177 passed 00:06:49.177 Test: blockdev write read 8 blocks ...passed 00:06:49.177 Test: blockdev write read size > 128k ...passed 00:06:49.177 Test: blockdev write read invalid size ...passed 00:06:49.177 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:49.177 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:49.177 Test: blockdev write read max offset ...passed 00:06:49.177 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:49.177 Test: blockdev writev readv 8 blocks ...passed 00:06:49.177 Test: blockdev writev readv 30 x 1block ...passed 00:06:49.177 Test: blockdev writev readv block ...passed 00:06:49.177 Test: blockdev writev readv size > 128k ...passed 00:06:49.177 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:49.177 Test: blockdev comparev and writev ...[2024-12-16 12:18:56.028869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba602000 len:0x1000 00:06:49.177 [2024-12-16 12:18:56.028980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:49.177 passed 00:06:49.177 Test: blockdev nvme passthru rw ...passed 00:06:49.177 Test: blockdev nvme passthru vendor specific ...[2024-12-16 12:18:56.029788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:49.177 passed 00:06:49.177 Test: blockdev nvme admin passthru ...[2024-12-16 12:18:56.029932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:49.177 passed 00:06:49.177 Test: blockdev copy ...passed 00:06:49.177 Suite: bdevio tests on: Nvme2n2 00:06:49.177 Test: blockdev write read block ...passed 00:06:49.177 Test: blockdev write zeroes read block ...passed 00:06:49.177 Test: blockdev write zeroes read no split ...passed 00:06:49.177 Test: blockdev write zeroes read split ...passed 00:06:49.177 Test: blockdev write zeroes read split partial ...passed 00:06:49.177 Test: blockdev reset ...[2024-12-16 12:18:56.086543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:49.177 [2024-12-16 12:18:56.091659] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:49.177 passed 00:06:49.177 Test: blockdev write read 8 blocks ...passed 00:06:49.177 Test: blockdev write read size > 128k ...passed 00:06:49.177 Test: blockdev write read invalid size ...passed 00:06:49.177 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:49.177 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:49.177 Test: blockdev write read max offset ...passed 00:06:49.177 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:49.177 Test: blockdev writev readv 8 blocks ...passed 00:06:49.177 Test: blockdev writev readv 30 x 1block ...passed 00:06:49.177 Test: blockdev writev readv block ...passed 00:06:49.177 Test: blockdev writev readv size > 128k ...passed 00:06:49.177 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:49.177 Test: blockdev comparev and writev ...[2024-12-16 12:18:56.098225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2caa38000 len:0x1000 00:06:49.177 [2024-12-16 12:18:56.098321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:49.177 passed 00:06:49.177 Test: blockdev nvme passthru rw ...passed 00:06:49.177 Test: blockdev nvme passthru vendor specific ...[2024-12-16 12:18:56.098932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:49.177 [2024-12-16 12:18:56.099001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:49.177 passed 00:06:49.177 Test: blockdev nvme admin passthru ...passed 00:06:49.177 Test: blockdev copy ...passed 00:06:49.177 Suite: bdevio tests on: Nvme2n1 00:06:49.177 Test: blockdev write read block ...passed 00:06:49.177 Test: blockdev write zeroes read block ...passed 00:06:49.177 Test: blockdev write zeroes read no split ...passed 00:06:49.177 Test: blockdev write zeroes read split ...passed 00:06:49.177 Test: blockdev write zeroes read split partial ...passed 00:06:49.177 Test: blockdev reset ...[2024-12-16 12:18:56.153412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:49.177 [2024-12-16 12:18:56.156404] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:49.177 passed 00:06:49.177 Test: blockdev write read 8 blocks ...passed 00:06:49.177 Test: blockdev write read size > 128k ...passed 00:06:49.177 Test: blockdev write read invalid size ...passed 00:06:49.177 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:49.177 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:49.177 Test: blockdev write read max offset ...passed 00:06:49.177 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:49.177 Test: blockdev writev readv 8 blocks ...passed 00:06:49.177 Test: blockdev writev readv 30 x 1block ...passed 00:06:49.177 Test: blockdev writev readv block ...passed 00:06:49.177 Test: blockdev writev readv size > 128k ...passed 00:06:49.177 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:49.177 Test: blockdev comparev and writev ...[2024-12-16 12:18:56.162190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2caa34000 len:0x1000 00:06:49.177 [2024-12-16 12:18:56.162285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:49.177 passed 00:06:49.177 Test: blockdev nvme passthru rw ...passed 00:06:49.177 Test: blockdev nvme passthru vendor specific ...[2024-12-16 12:18:56.162798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:49.177 [2024-12-16 12:18:56.162817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:49.177 passed 00:06:49.177 Test: blockdev nvme admin passthru ...passed 00:06:49.177 Test: blockdev copy ...passed 00:06:49.177 Suite: bdevio tests on: Nvme1n1p2 00:06:49.177 Test: blockdev write read block ...passed 00:06:49.177 Test: blockdev write zeroes read block ...passed 00:06:49.177 Test: blockdev write zeroes read no split ...passed 00:06:49.177 Test: blockdev write zeroes read split ...passed 00:06:49.177 Test: blockdev write zeroes read split partial ...passed 00:06:49.177 Test: blockdev reset ...[2024-12-16 12:18:56.206926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:49.177 [2024-12-16 12:18:56.209391] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:49.177 passed 00:06:49.177 Test: blockdev write read 8 blocks ...passed 00:06:49.177 Test: blockdev write read size > 128k ...passed 00:06:49.177 Test: blockdev write read invalid size ...passed 00:06:49.177 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:49.177 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:49.177 Test: blockdev write read max offset ...passed 00:06:49.177 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:49.177 Test: blockdev writev readv 8 blocks ...passed 00:06:49.177 Test: blockdev writev readv 30 x 1block ...passed 00:06:49.177 Test: blockdev writev readv block ...passed 00:06:49.177 Test: blockdev writev readv size > 128k ...passed 00:06:49.177 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:49.177 Test: blockdev comparev and writev ...[2024-12-16 12:18:56.216004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2caa30000 len:0x1000 00:06:49.177 [2024-12-16 12:18:56.216039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:49.177 passed 00:06:49.177 Test: blockdev nvme passthru rw ...passed 00:06:49.177 Test: blockdev nvme passthru vendor specific ...passed 00:06:49.177 Test: blockdev nvme admin passthru ...passed 00:06:49.177 Test: blockdev copy ...passed 00:06:49.177 Suite: bdevio tests on: Nvme1n1p1 00:06:49.177 Test: blockdev write read block ...passed 00:06:49.177 Test: blockdev write zeroes read block ...passed 00:06:49.177 Test: blockdev write zeroes read no split ...passed 00:06:49.177 Test: blockdev write zeroes read split ...passed 00:06:49.177 Test: blockdev write zeroes read split partial ...passed 00:06:49.177 Test: blockdev reset ...[2024-12-16 12:18:56.257562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:49.178 [2024-12-16 12:18:56.260145] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:49.178 passed 00:06:49.178 Test: blockdev write read 8 blocks ...passed 00:06:49.178 Test: blockdev write read size > 128k ...passed 00:06:49.178 Test: blockdev write read invalid size ...passed 00:06:49.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:49.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:49.178 Test: blockdev write read max offset ...passed 00:06:49.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:49.178 Test: blockdev writev readv 8 blocks ...passed 00:06:49.178 Test: blockdev writev readv 30 x 1block ...passed 00:06:49.178 Test: blockdev writev readv block ...passed 00:06:49.178 Test: blockdev writev readv size > 128k ...passed 00:06:49.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:49.178 Test: blockdev comparev and writev ...[2024-12-16 12:18:56.266922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ba80e000 len:0x1000 00:06:49.178 [2024-12-16 12:18:56.266957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:49.178 passed 00:06:49.178 Test: blockdev nvme passthru rw ...passed 00:06:49.178 Test: blockdev nvme passthru vendor specific ...passed 00:06:49.178 Test: blockdev nvme admin passthru ...passed 00:06:49.178 Test: blockdev copy ...passed 00:06:49.178 Suite: bdevio tests on: Nvme0n1 00:06:49.178 Test: blockdev write read block ...passed 00:06:49.178 Test: blockdev write zeroes read block ...passed 00:06:49.178 Test: blockdev write zeroes read no split ...passed 00:06:49.436 Test: blockdev write zeroes read split ...passed 00:06:49.436 Test: blockdev write zeroes read split partial ...passed 00:06:49.436 Test: blockdev reset ...[2024-12-16 12:18:56.310041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:49.436 [2024-12-16 12:18:56.312702] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:49.436 passed 00:06:49.436 Test: blockdev write read 8 blocks ...passed 00:06:49.436 Test: blockdev write read size > 128k ...passed 00:06:49.436 Test: blockdev write read invalid size ...passed 00:06:49.436 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:49.436 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:49.436 Test: blockdev write read max offset ...passed 00:06:49.436 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:49.436 Test: blockdev writev readv 8 blocks ...passed 00:06:49.436 Test: blockdev writev readv 30 x 1block ...passed 00:06:49.436 Test: blockdev writev readv block ...passed 00:06:49.436 Test: blockdev writev readv size > 128k ...passed 00:06:49.436 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:49.436 Test: blockdev comparev and writev ...[2024-12-16 12:18:56.318268] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:49.436 separate metadata which is not supported yet. 
00:06:49.436 passed 00:06:49.436 Test: blockdev nvme passthru rw ...passed 00:06:49.436 Test: blockdev nvme passthru vendor specific ...[2024-12-16 12:18:56.318746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:49.436 [2024-12-16 12:18:56.318777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:49.436 passed 00:06:49.436 Test: blockdev nvme admin passthru ...passed 00:06:49.436 Test: blockdev copy ...passed 00:06:49.436 00:06:49.436 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.436 suites 7 7 n/a 0 0 00:06:49.436 tests 161 161 161 0 0 00:06:49.436 asserts 1025 1025 1025 0 n/a 00:06:49.436 00:06:49.436 Elapsed time = 1.110 seconds 00:06:49.436 0 00:06:49.436 12:18:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63155 00:06:49.436 12:18:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63155 ']' 00:06:49.437 12:18:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63155 00:06:49.437 12:18:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:49.437 12:18:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.437 12:18:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63155 00:06:49.437 12:18:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.437 12:18:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.437 killing process with pid 63155 00:06:49.437 12:18:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63155' 00:06:49.437 12:18:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63155 00:06:49.437 12:18:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63155 00:06:50.004 ************************************ 00:06:50.004 END TEST bdev_bounds 00:06:50.004 ************************************ 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:50.004 00:06:50.004 real 0m2.056s 00:06:50.004 user 0m5.295s 00:06:50.004 sys 0m0.256s 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:50.004 12:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:50.004 12:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:50.004 12:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.004 12:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.004 ************************************ 00:06:50.004 START TEST bdev_nbd 00:06:50.004 ************************************ 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:50.004 12:18:57 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63209 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63209 /var/tmp/spdk-nbd.sock 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63209 ']' 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:50.004 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:50.004 [2024-12-16 12:18:57.100133] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:50.004 [2024-12-16 12:18:57.100233] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.262 [2024-12-16 12:18:57.250200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.262 [2024-12-16 12:18:57.327959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.197 12:18:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:51.197 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:51.197 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:51.197 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.198 1+0 records in 00:06:51.198 1+0 records out 00:06:51.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362714 s, 11.3 MB/s 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.198 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.456 1+0 records in 00:06:51.456 1+0 records out 00:06:51.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396923 s, 10.3 MB/s 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.456 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.457 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.457 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.457 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.715 1+0 records in 00:06:51.715 1+0 records out 00:06:51.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039208 s, 10.4 MB/s 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.715 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.974 1+0 records in 00:06:51.974 1+0 records out 00:06:51.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453962 s, 9.0 MB/s 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.974 12:18:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.233 1+0 records in 00:06:52.233 1+0 records out 00:06:52.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478567 s, 8.6 MB/s 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.233 1+0 records in 00:06:52.233 1+0 records out 00:06:52.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421006 s, 9.7 MB/s 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:52.233 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.492 1+0 records in 00:06:52.492 1+0 records out 00:06:52.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510289 s, 8.0 MB/s 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:52.492 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.751 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd0", 00:06:52.751 "bdev_name": "Nvme0n1" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd1", 00:06:52.751 "bdev_name": "Nvme1n1p1" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd2", 00:06:52.751 "bdev_name": "Nvme1n1p2" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd3", 00:06:52.751 "bdev_name": "Nvme2n1" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd4", 00:06:52.751 "bdev_name": "Nvme2n2" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd5", 00:06:52.751 "bdev_name": "Nvme2n3" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd6", 00:06:52.751 "bdev_name": "Nvme3n1" 00:06:52.751 } 00:06:52.751 ]' 00:06:52.751 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:52.751 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd0", 00:06:52.751 "bdev_name": "Nvme0n1" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd1", 00:06:52.751 "bdev_name": "Nvme1n1p1" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd2", 00:06:52.751 "bdev_name": "Nvme1n1p2" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd3", 00:06:52.751 "bdev_name": "Nvme2n1" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd4", 00:06:52.751 "bdev_name": "Nvme2n2" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd5", 00:06:52.751 "bdev_name": "Nvme2n3" 00:06:52.751 }, 00:06:52.751 { 00:06:52.751 "nbd_device": "/dev/nbd6", 00:06:52.751 "bdev_name": "Nvme3n1" 00:06:52.751 } 00:06:52.751 ]' 00:06:52.751 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:52.751 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:52.751 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.751 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:52.751 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.751 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:52.751 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.751 12:18:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.009 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.009 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.009 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.009 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.009 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.009 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.009 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.009 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.009 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.009 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.268 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.268 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.268 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.268 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.268 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.268 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.268 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.268 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.268 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.268 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:53.526 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:53.526 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:53.526 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:53.526 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.526 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.526 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:53.526 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.526 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.526 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.526 12:19:00 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.785 12:19:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:54.057 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:54.057 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:54.057 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:54.057 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.057 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.057 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:54.057 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.057 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.057 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.057 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:54.356 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:54.356 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:54.356 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:54.356 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.356 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.356 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:54.356 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.356 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.356 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.356 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.356 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.615 
12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.615 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:54.874 /dev/nbd0 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.874 1+0 records in 00:06:54.874 1+0 records out 00:06:54.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479322 s, 8.5 MB/s 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.874 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:54.874 /dev/nbd1 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.132 12:19:01 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.132 1+0 records in 00:06:55.132 1+0 records out 00:06:55.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541468 s, 7.6 MB/s 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.132 12:19:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:55.132 /dev/nbd10 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.133 1+0 records in 00:06:55.133 1+0 records out 00:06:55.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486368 s, 8.4 MB/s 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.133 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:55.392 /dev/nbd11 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.392 1+0 records in 00:06:55.392 1+0 records out 00:06:55.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433229 s, 9.5 MB/s 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.392 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:55.650 /dev/nbd12 00:06:55.650 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:55.650 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:55.650 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.651 1+0 records in 00:06:55.651 1+0 records out 00:06:55.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530946 s, 7.7 MB/s 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.651 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:55.909 /dev/nbd13 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.909 1+0 records in 00:06:55.909 1+0 records out 00:06:55.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400475 s, 10.2 MB/s 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.909 12:19:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:56.167 /dev/nbd14 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.167 1+0 records in 00:06:56.167 1+0 records out 00:06:56.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570335 s, 7.2 MB/s 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.167 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd0", 00:06:56.426 "bdev_name": "Nvme0n1" 00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd1", 00:06:56.426 "bdev_name": "Nvme1n1p1" 00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd10", 00:06:56.426 "bdev_name": "Nvme1n1p2" 00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd11", 00:06:56.426 "bdev_name": "Nvme2n1" 00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd12", 00:06:56.426 "bdev_name": "Nvme2n2" 00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd13", 00:06:56.426 "bdev_name": "Nvme2n3" 
00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd14", 00:06:56.426 "bdev_name": "Nvme3n1" 00:06:56.426 } 00:06:56.426 ]' 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd0", 00:06:56.426 "bdev_name": "Nvme0n1" 00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd1", 00:06:56.426 "bdev_name": "Nvme1n1p1" 00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd10", 00:06:56.426 "bdev_name": "Nvme1n1p2" 00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd11", 00:06:56.426 "bdev_name": "Nvme2n1" 00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd12", 00:06:56.426 "bdev_name": "Nvme2n2" 00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd13", 00:06:56.426 "bdev_name": "Nvme2n3" 00:06:56.426 }, 00:06:56.426 { 00:06:56.426 "nbd_device": "/dev/nbd14", 00:06:56.426 "bdev_name": "Nvme3n1" 00:06:56.426 } 00:06:56.426 ]' 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.426 /dev/nbd1 00:06:56.426 /dev/nbd10 00:06:56.426 /dev/nbd11 00:06:56.426 /dev/nbd12 00:06:56.426 /dev/nbd13 00:06:56.426 /dev/nbd14' 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.426 /dev/nbd1 00:06:56.426 /dev/nbd10 00:06:56.426 /dev/nbd11 00:06:56.426 /dev/nbd12 00:06:56.426 /dev/nbd13 00:06:56.426 /dev/nbd14' 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:56.426 256+0 records in 00:06:56.426 256+0 records out 00:06:56.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00892889 s, 117 MB/s 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.426 256+0 records in 00:06:56.426 256+0 records out 00:06:56.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0605749 s, 17.3 MB/s 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.426 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.684 256+0 records in 00:06:56.684 256+0 records out 00:06:56.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.06 s, 17.5 MB/s 00:06:56.684 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.684 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:56.684 256+0 records in 00:06:56.684 256+0 records out 00:06:56.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0606111 s, 17.3 MB/s 00:06:56.685 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.685 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:56.685 256+0 records in 00:06:56.685 256+0 records out 00:06:56.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.062287 s, 16.8 MB/s 00:06:56.685 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.685 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:56.685 256+0 records in 00:06:56.685 256+0 records out 00:06:56.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0579488 s, 18.1 MB/s 00:06:56.685 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.685 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:56.943 256+0 records in 00:06:56.943 256+0 records out 00:06:56.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0587383 s, 17.9 MB/s 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:56.943 256+0 records in 00:06:56.943 256+0 records out 00:06:56.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0579518 s, 18.1 MB/s 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.943 12:19:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.202 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.202 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.202 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.202 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.202 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.202 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.202 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.202 12:19:04 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:57.202 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.202 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.460 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:57.718 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:57.718 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:57.718 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:57.718 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.718 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.718 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:57.718 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.718 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.718 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.718 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:57.977 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:57.977 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:57.977 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:57.977 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.977 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.977 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:57.977 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.977 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.977 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.977 12:19:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:58.235 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:58.235 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:58.235 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:58.235 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.235 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.235 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:58.235 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.235 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.235 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.235 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.493 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:58.751 malloc_lvol_verify 00:06:58.751 12:19:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:59.009 5bc26e32-3131-4389-9ae6-020f990e553d 00:06:59.009 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:59.268 08ee2d95-ef73-4d36-bbc6-62b5ae567a31 00:06:59.268 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:59.526 /dev/nbd0 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:59.526 mke2fs 1.47.0 (5-Feb-2023) 00:06:59.526 Discarding device blocks: 0/4096 done 00:06:59.526 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:59.526 00:06:59.526 Allocating group tables: 0/1 done 00:06:59.526 Writing inode tables: 0/1 done 00:06:59.526 Creating journal (1024 blocks): done 00:06:59.526 Writing superblocks and filesystem accounting information: 0/1 done 00:06:59.526 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:59.526 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63209 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63209 ']' 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63209 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63209 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.785 killing process with pid 63209 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63209' 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63209 00:06:59.785 12:19:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63209 00:07:00.352 12:19:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:00.352 00:07:00.352 real 0m10.258s 00:07:00.352 user 0m14.864s 00:07:00.352 sys 0m3.347s 00:07:00.352 12:19:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.352 12:19:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:00.352 ************************************ 00:07:00.352 END TEST bdev_nbd 00:07:00.352 ************************************ 00:07:00.352 12:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:00.352 12:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:00.352 12:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:00.352 skipping fio tests on NVMe due to multi-ns failures. 00:07:00.352 12:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:00.352 12:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:00.352 12:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:00.352 12:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:00.352 12:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.352 12:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:00.352 ************************************ 00:07:00.352 START TEST bdev_verify 00:07:00.352 ************************************ 00:07:00.352 12:19:07 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:00.352 [2024-12-16 12:19:07.410704] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:00.352 [2024-12-16 12:19:07.410822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63616 ] 00:07:00.610 [2024-12-16 12:19:07.566593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.610 [2024-12-16 12:19:07.647466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.610 [2024-12-16 12:19:07.647579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.177 Running I/O for 5 seconds... 
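The verify pass just launched, and every bdevperf phase after it, drives the same binary with a different workload; all of the flags below are copied from the command line recorded in the trace (128-deep queue, 4096-byte I/Os, a 5-second verify workload, core mask 0x3 for the two reactors). A minimal standalone equivalent, assuming the repo layout shown above:

# Flags verbatim from the trace; only this shell wrapper is new.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3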
00:07:03.490 24320.00 IOPS, 95.00 MiB/s [2024-12-16T12:19:11.570Z] 23328.00 IOPS, 91.12 MiB/s [2024-12-16T12:19:12.504Z] 22720.00 IOPS, 88.75 MiB/s [2024-12-16T12:19:13.440Z] 22256.00 IOPS, 86.94 MiB/s [2024-12-16T12:19:13.440Z] 22476.80 IOPS, 87.80 MiB/s 00:07:06.334 Latency(us) 00:07:06.334 [2024-12-16T12:19:13.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.334 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x0 length 0xbd0bd 00:07:06.334 Nvme0n1 : 5.05 1595.47 6.23 0.00 0.00 80074.60 12703.90 104051.00 00:07:06.334 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:06.334 Nvme0n1 : 5.07 1603.71 6.26 0.00 0.00 79042.82 6604.01 69367.34 00:07:06.334 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x0 length 0x4ff80 00:07:06.334 Nvme1n1p1 : 5.06 1595.01 6.23 0.00 0.00 79996.77 10889.06 105664.20 00:07:06.334 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:06.334 Nvme1n1p1 : 5.08 1611.09 6.29 0.00 0.00 78640.73 13712.15 71383.83 00:07:06.334 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x0 length 0x4ff7f 00:07:06.334 Nvme1n1p2 : 5.06 1594.02 6.23 0.00 0.00 79932.46 12653.49 101631.21 00:07:06.334 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:06.334 Nvme1n1p2 : 5.09 1610.64 6.29 0.00 0.00 78571.72 11342.77 71787.13 00:07:06.334 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x0 length 0x80000 00:07:06.334 Nvme2n1 : 5.06 1593.58 6.22 0.00 0.00 79844.52 13006.38 97598.23 00:07:06.334 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x80000 length 0x80000 00:07:06.334 Nvme2n1 : 5.09 1610.23 6.29 0.00 0.00 78515.46 8418.86 71787.13 00:07:06.334 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x0 length 0x80000 00:07:06.334 Nvme2n2 : 5.06 1593.13 6.22 0.00 0.00 79748.66 13308.85 99614.72 00:07:06.334 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x80000 length 0x80000 00:07:06.334 Nvme2n2 : 5.05 1597.88 6.24 0.00 0.00 79848.55 16535.24 71787.13 00:07:06.334 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x0 length 0x80000 00:07:06.334 Nvme2n3 : 5.06 1592.72 6.22 0.00 0.00 79645.32 13712.15 100824.62 00:07:06.334 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x80000 length 0x80000 00:07:06.334 Nvme2n3 : 5.05 1597.42 6.24 0.00 0.00 79680.00 16131.94 70980.53 00:07:06.334 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x0 length 0x20000 00:07:06.334 Nvme3n1 : 5.06 1592.30 6.22 0.00 0.00 79556.63 8822.15 103244.41 00:07:06.334 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.334 Verification LBA range: start 0x20000 length 0x20000 00:07:06.334 
Nvme3n1 : 5.05 1596.98 6.24 0.00 0.00 79537.58 16535.24 69367.34 00:07:06.334 [2024-12-16T12:19:13.440Z] =================================================================================================================== 00:07:06.334 [2024-12-16T12:19:13.440Z] Total : 22384.17 87.44 0.00 0.00 79470.70 6604.01 105664.20 00:07:07.710 00:07:07.710 real 0m7.088s 00:07:07.710 user 0m13.310s 00:07:07.710 sys 0m0.211s 00:07:07.710 12:19:14 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.710 12:19:14 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:07.710 ************************************ 00:07:07.710 END TEST bdev_verify 00:07:07.710 ************************************ 00:07:07.710 12:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:07.710 12:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:07.710 12:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.710 12:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:07.710 ************************************ 00:07:07.710 START TEST bdev_verify_big_io 00:07:07.710 ************************************ 00:07:07.710 12:19:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:07.710 [2024-12-16 12:19:14.541588] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:07.710 [2024-12-16 12:19:14.541700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63714 ] 00:07:07.710 [2024-12-16 12:19:14.699722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.710 [2024-12-16 12:19:14.794659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.710 [2024-12-16 12:19:14.794733] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.643 Running I/O for 5 seconds... 
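Each namespace appears twice in the table above because the 0x3 core mask splits its queue across the two reactors, and throughput is simply IOPS times the 4 KiB I/O size. The final 5-second sample can be checked directly:

# 4096 / 1048576 = 1/256, so MiB/s here is IOPS / 256.
awk 'BEGIN { printf "%.2f MiB/s\n", 22476.80 * 4096 / 1048576 }'   # 87.80, matching the trace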
00:07:13.824 1954.00 IOPS, 122.12 MiB/s [2024-12-16T12:19:21.865Z] 3004.50 IOPS, 187.78 MiB/s [2024-12-16T12:19:21.865Z] 3519.33 IOPS, 219.96 MiB/s 00:07:14.759 Latency(us) 00:07:14.759 [2024-12-16T12:19:21.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.759 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x0 length 0xbd0b 00:07:14.759 Nvme0n1 : 5.85 123.60 7.72 0.00 0.00 968123.29 11897.30 1206669.00 00:07:14.759 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:14.759 Nvme0n1 : 5.78 129.55 8.10 0.00 0.00 949996.04 13006.38 1148594.02 00:07:14.759 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x0 length 0x4ff8 00:07:14.759 Nvme1n1p1 : 5.92 121.11 7.57 0.00 0.00 976844.00 75416.81 1193763.45 00:07:14.759 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:14.759 Nvme1n1p1 : 5.87 131.33 8.21 0.00 0.00 902929.02 64527.75 955010.76 00:07:14.759 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x0 length 0x4ff7 00:07:14.759 Nvme1n1p2 : 5.92 127.50 7.97 0.00 0.00 898290.97 95581.74 1000180.18 00:07:14.759 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:14.759 Nvme1n1p2 : 5.87 131.27 8.20 0.00 0.00 874613.91 87515.77 909841.33 00:07:14.759 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x0 length 0x8000 00:07:14.759 Nvme2n1 : 5.93 126.75 7.92 0.00 0.00 889001.38 68157.44 1471232.79 00:07:14.759 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x8000 length 0x8000 00:07:14.759 Nvme2n1 : 5.88 135.32 8.46 0.00 0.00 829218.92 87515.77 942105.21 00:07:14.759 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x0 length 0x8000 00:07:14.759 Nvme2n2 : 6.03 130.55 8.16 0.00 0.00 834383.52 59688.17 1484138.34 00:07:14.759 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x8000 length 0x8000 00:07:14.759 Nvme2n2 : 5.99 144.57 9.04 0.00 0.00 758157.45 34683.67 948557.98 00:07:14.759 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x0 length 0x8000 00:07:14.759 Nvme2n3 : 6.06 139.72 8.73 0.00 0.00 762286.59 29642.44 1516402.22 00:07:14.759 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x8000 length 0x8000 00:07:14.759 Nvme2n3 : 5.99 149.37 9.34 0.00 0.00 718876.69 26012.75 967916.31 00:07:14.759 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x0 length 0x2000 00:07:14.759 Nvme3n1 : 6.11 159.18 9.95 0.00 0.00 652816.46 620.70 1542213.32 00:07:14.759 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:14.759 Verification LBA range: start 0x2000 length 0x2000 00:07:14.759 Nvme3n1 : 6.06 169.58 10.60 0.00 0.00 620355.44 721.53 980821.86 00:07:14.759 
[2024-12-16T12:19:21.865Z] =================================================================================================================== 00:07:14.759 [2024-12-16T12:19:21.865Z] Total : 1919.39 119.96 0.00 0.00 819287.67 620.70 1542213.32 00:07:16.153 00:07:16.153 real 0m8.696s 00:07:16.153 user 0m16.480s 00:07:16.153 sys 0m0.235s 00:07:16.153 12:19:23 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.153 ************************************ 00:07:16.153 END TEST bdev_verify_big_io 00:07:16.153 12:19:23 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:16.153 ************************************ 00:07:16.153 12:19:23 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:16.153 12:19:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:16.153 12:19:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.153 12:19:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.153 ************************************ 00:07:16.153 START TEST bdev_write_zeroes 00:07:16.153 ************************************ 00:07:16.153 12:19:23 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:16.412 [2024-12-16 12:19:23.295800] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:16.412 [2024-12-16 12:19:23.295912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63823 ] 00:07:16.412 [2024-12-16 12:19:23.447686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.673 [2024-12-16 12:19:23.547695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.239 Running I/O for 1 seconds... 
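The big-I/O pass above repeats the verify workload with -o 65536, so each I/O moves 64 KiB and MiB/s is IOPS divided by 16; its totals row checks out the same way:

awk 'BEGIN { printf "%.2f MiB/s\n", 1919.39 * 65536 / 1048576 }'   # 119.96, matching the totals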
00:07:18.176 64512.00 IOPS, 252.00 MiB/s 00:07:18.176 Latency(us) 00:07:18.176 [2024-12-16T12:19:25.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.176 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:18.176 Nvme0n1 : 1.02 9190.52 35.90 0.00 0.00 13892.00 6704.84 29642.44 00:07:18.176 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:18.176 Nvme1n1p1 : 1.02 9179.43 35.86 0.00 0.00 13894.55 10889.06 26819.35 00:07:18.176 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:18.176 Nvme1n1p2 : 1.03 9168.35 35.81 0.00 0.00 13824.73 10536.17 23996.26 00:07:18.176 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:18.176 Nvme2n1 : 1.03 9157.99 35.77 0.00 0.00 13792.30 10838.65 23391.31 00:07:18.176 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:18.176 Nvme2n2 : 1.03 9147.76 35.73 0.00 0.00 13767.92 10737.82 22887.19 00:07:18.176 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:18.176 Nvme2n3 : 1.03 9137.52 35.69 0.00 0.00 13750.32 10183.29 22887.19 00:07:18.176 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:18.176 Nvme3n1 : 1.03 9127.28 35.65 0.00 0.00 13728.33 8570.09 24601.21 00:07:18.176 [2024-12-16T12:19:25.282Z] =================================================================================================================== 00:07:18.176 [2024-12-16T12:19:25.282Z] Total : 64108.86 250.43 0.00 0.00 13807.16 6704.84 29642.44 00:07:19.116 00:07:19.116 real 0m2.674s 00:07:19.116 user 0m2.375s 00:07:19.116 sys 0m0.186s 00:07:19.116 12:19:25 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.116 12:19:25 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:19.116 ************************************ 00:07:19.116 END TEST bdev_write_zeroes 00:07:19.116 ************************************ 00:07:19.116 12:19:25 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:19.116 12:19:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:19.116 12:19:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.116 12:19:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:19.116 ************************************ 00:07:19.116 START TEST bdev_json_nonenclosed 00:07:19.116 ************************************ 00:07:19.116 12:19:25 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:19.116 [2024-12-16 12:19:26.018262] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:19.116 [2024-12-16 12:19:26.018376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63876 ] 00:07:19.116 [2024-12-16 12:19:26.178420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.377 [2024-12-16 12:19:26.273924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.377 [2024-12-16 12:19:26.273993] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:19.377 [2024-12-16 12:19:26.274009] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:19.377 [2024-12-16 12:19:26.274018] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.377 00:07:19.377 real 0m0.490s 00:07:19.377 user 0m0.302s 00:07:19.377 sys 0m0.085s 00:07:19.377 12:19:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.377 12:19:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:19.377 ************************************ 00:07:19.377 END TEST bdev_json_nonenclosed 00:07:19.377 ************************************ 00:07:19.638 12:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:19.638 12:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:19.638 12:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.638 12:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:19.638 ************************************ 00:07:19.638 START TEST bdev_json_nonarray 00:07:19.638 ************************************ 00:07:19.638 12:19:26 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:19.638 [2024-12-16 12:19:26.564532] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:19.638 [2024-12-16 12:19:26.564618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63896 ] 00:07:19.638 [2024-12-16 12:19:26.715724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.899 [2024-12-16 12:19:26.813749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.899 [2024-12-16 12:19:26.813835] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
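The two json_config failures above are the expected outcomes of deliberately malformed configs: nonenclosed.json is not wrapped in braces, and nonarray.json gives "subsystems" a non-array value. The trace never prints the fixture files themselves, so the shapes below are inferred from the two error messages; only the last form is the minimal valid skeleton.

# Inferred fixture shapes (assumptions, not the repo's actual files).
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# Minimal well-formed counterpart:
cat > valid.json <<'EOF'
{ "subsystems": [] }
EOF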
00:07:19.899 [2024-12-16 12:19:26.813851] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:19.899 [2024-12-16 12:19:26.813861] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.899 00:07:19.899 real 0m0.479s 00:07:19.899 user 0m0.284s 00:07:19.899 sys 0m0.091s 00:07:19.899 12:19:26 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.899 12:19:26 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:19.899 ************************************ 00:07:19.899 END TEST bdev_json_nonarray 00:07:19.899 ************************************ 00:07:20.159 12:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:20.159 12:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:20.159 12:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:20.159 12:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.159 12:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.159 12:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.159 ************************************ 00:07:20.159 START TEST bdev_gpt_uuid 00:07:20.159 ************************************ 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63927 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63927 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63927 ']' 00:07:20.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.159 12:19:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.159 [2024-12-16 12:19:27.129111] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:20.159 [2024-12-16 12:19:27.129241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63927 ] 00:07:20.420 [2024-12-16 12:19:27.288511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.420 [2024-12-16 12:19:27.425376] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.990 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.990 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:20.990 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:20.990 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.990 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.559 Some configs were skipped because the RPC state that can call them passed over. 00:07:21.559 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.559 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:21.559 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.559 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.559 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.559 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:21.559 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.559 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.559 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.559 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:21.559 { 00:07:21.559 "name": "Nvme1n1p1", 00:07:21.559 "aliases": [ 00:07:21.559 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:21.559 ], 00:07:21.559 "product_name": "GPT Disk", 00:07:21.559 "block_size": 4096, 00:07:21.559 "num_blocks": 655104, 00:07:21.559 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:21.559 "assigned_rate_limits": { 00:07:21.559 "rw_ios_per_sec": 0, 00:07:21.559 "rw_mbytes_per_sec": 0, 00:07:21.559 "r_mbytes_per_sec": 0, 00:07:21.559 "w_mbytes_per_sec": 0 00:07:21.559 }, 00:07:21.559 "claimed": false, 00:07:21.559 "zoned": false, 00:07:21.559 "supported_io_types": { 00:07:21.559 "read": true, 00:07:21.559 "write": true, 00:07:21.560 "unmap": true, 00:07:21.560 "flush": true, 00:07:21.560 "reset": true, 00:07:21.560 "nvme_admin": false, 00:07:21.560 "nvme_io": false, 00:07:21.560 "nvme_io_md": false, 00:07:21.560 "write_zeroes": true, 00:07:21.560 "zcopy": false, 00:07:21.560 "get_zone_info": false, 00:07:21.560 "zone_management": false, 00:07:21.560 "zone_append": false, 00:07:21.560 "compare": true, 00:07:21.560 "compare_and_write": false, 00:07:21.560 "abort": true, 00:07:21.560 "seek_hole": false, 00:07:21.560 "seek_data": false, 00:07:21.560 "copy": true, 00:07:21.560 "nvme_iov_md": false 00:07:21.560 }, 00:07:21.560 "driver_specific": { 
00:07:21.560 "gpt": { 00:07:21.560 "base_bdev": "Nvme1n1", 00:07:21.560 "offset_blocks": 256, 00:07:21.560 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:21.560 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:21.560 "partition_name": "SPDK_TEST_first" 00:07:21.560 } 00:07:21.560 } 00:07:21.560 } 00:07:21.560 ]' 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:21.560 { 00:07:21.560 "name": "Nvme1n1p2", 00:07:21.560 "aliases": [ 00:07:21.560 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:21.560 ], 00:07:21.560 "product_name": "GPT Disk", 00:07:21.560 "block_size": 4096, 00:07:21.560 "num_blocks": 655103, 00:07:21.560 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:21.560 "assigned_rate_limits": { 00:07:21.560 "rw_ios_per_sec": 0, 00:07:21.560 "rw_mbytes_per_sec": 0, 00:07:21.560 "r_mbytes_per_sec": 0, 00:07:21.560 "w_mbytes_per_sec": 0 00:07:21.560 }, 00:07:21.560 "claimed": false, 00:07:21.560 "zoned": false, 00:07:21.560 "supported_io_types": { 00:07:21.560 "read": true, 00:07:21.560 "write": true, 00:07:21.560 "unmap": true, 00:07:21.560 "flush": true, 00:07:21.560 "reset": true, 00:07:21.560 "nvme_admin": false, 00:07:21.560 "nvme_io": false, 00:07:21.560 "nvme_io_md": false, 00:07:21.560 "write_zeroes": true, 00:07:21.560 "zcopy": false, 00:07:21.560 "get_zone_info": false, 00:07:21.560 "zone_management": false, 00:07:21.560 "zone_append": false, 00:07:21.560 "compare": true, 00:07:21.560 "compare_and_write": false, 00:07:21.560 "abort": true, 00:07:21.560 "seek_hole": false, 00:07:21.560 "seek_data": false, 00:07:21.560 "copy": true, 00:07:21.560 "nvme_iov_md": false 00:07:21.560 }, 00:07:21.560 "driver_specific": { 00:07:21.560 "gpt": { 00:07:21.560 "base_bdev": "Nvme1n1", 00:07:21.560 "offset_blocks": 655360, 00:07:21.560 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:21.560 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:21.560 "partition_name": "SPDK_TEST_second" 00:07:21.560 } 00:07:21.560 } 00:07:21.560 } 00:07:21.560 ]' 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63927 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63927 ']' 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63927 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63927 00:07:21.560 killing process with pid 63927 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63927' 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63927 00:07:21.560 12:19:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63927 00:07:23.473 00:07:23.473 real 0m3.067s 00:07:23.473 user 0m3.252s 00:07:23.473 sys 0m0.368s 00:07:23.473 12:19:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.473 ************************************ 00:07:23.473 END TEST bdev_gpt_uuid 00:07:23.473 ************************************ 00:07:23.473 12:19:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.473 12:19:30 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:23.473 12:19:30 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:23.473 12:19:30 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:23.473 12:19:30 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:23.473 12:19:30 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:23.473 12:19:30 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:23.473 12:19:30 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:23.473 12:19:30 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:23.473 12:19:30 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:23.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:23.733 Waiting for block devices as requested 00:07:23.733 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:23.734 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:23.993 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:23.993 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:29.257 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:29.257 12:19:35 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:29.257 12:19:35 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:29.257 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:29.257 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:29.257 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:29.257 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:29.257 12:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:29.257 00:07:29.257 real 0m54.386s 00:07:29.257 user 1m9.785s 00:07:29.257 sys 0m7.428s 00:07:29.257 12:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.257 ************************************ 00:07:29.257 END TEST blockdev_nvme_gpt 00:07:29.257 ************************************ 00:07:29.257 12:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.516 12:19:36 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:29.516 12:19:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.516 12:19:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.516 12:19:36 -- common/autotest_common.sh@10 -- # set +x 00:07:29.516 ************************************ 00:07:29.516 START TEST nvme 00:07:29.516 ************************************ 00:07:29.516 12:19:36 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:29.516 * Looking for test storage... 00:07:29.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:29.516 12:19:36 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.516 12:19:36 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.516 12:19:36 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.516 12:19:36 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.516 12:19:36 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.516 12:19:36 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.516 12:19:36 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.516 12:19:36 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.516 12:19:36 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.516 12:19:36 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.516 12:19:36 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.516 12:19:36 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.516 12:19:36 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.516 12:19:36 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.516 12:19:36 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.516 12:19:36 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:29.516 12:19:36 nvme -- scripts/common.sh@345 -- # : 1 00:07:29.516 12:19:36 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.516 12:19:36 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.516 12:19:36 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:29.516 12:19:36 nvme -- scripts/common.sh@353 -- # local d=1 00:07:29.516 12:19:36 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.516 12:19:36 nvme -- scripts/common.sh@355 -- # echo 1 00:07:29.516 12:19:36 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.516 12:19:36 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:29.516 12:19:36 nvme -- scripts/common.sh@353 -- # local d=2 00:07:29.516 12:19:36 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.516 12:19:36 nvme -- scripts/common.sh@355 -- # echo 2 00:07:29.516 12:19:36 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.516 12:19:36 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.516 12:19:36 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.516 12:19:36 nvme -- scripts/common.sh@368 -- # return 0 00:07:29.516 12:19:36 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.516 12:19:36 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.516 --rc genhtml_branch_coverage=1 00:07:29.516 --rc genhtml_function_coverage=1 00:07:29.516 --rc genhtml_legend=1 00:07:29.516 --rc geninfo_all_blocks=1 00:07:29.516 --rc geninfo_unexecuted_blocks=1 00:07:29.516 00:07:29.516 ' 00:07:29.516 12:19:36 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.516 --rc genhtml_branch_coverage=1 00:07:29.516 --rc genhtml_function_coverage=1 00:07:29.516 --rc genhtml_legend=1 00:07:29.516 --rc geninfo_all_blocks=1 00:07:29.516 --rc geninfo_unexecuted_blocks=1 00:07:29.516 00:07:29.516 ' 00:07:29.516 12:19:36 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:29.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.516 --rc genhtml_branch_coverage=1 00:07:29.516 --rc genhtml_function_coverage=1 00:07:29.516 --rc genhtml_legend=1 00:07:29.516 --rc geninfo_all_blocks=1 00:07:29.516 --rc geninfo_unexecuted_blocks=1 00:07:29.516 00:07:29.516 ' 00:07:29.516 12:19:36 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.516 --rc genhtml_branch_coverage=1 00:07:29.516 --rc genhtml_function_coverage=1 00:07:29.516 --rc genhtml_legend=1 00:07:29.516 --rc geninfo_all_blocks=1 00:07:29.516 --rc geninfo_unexecuted_blocks=1 00:07:29.516 00:07:29.516 ' 00:07:29.516 12:19:36 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:30.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.341 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.341 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.341 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.599 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.599 12:19:37 nvme -- nvme/nvme.sh@79 -- # uname 00:07:30.599 12:19:37 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:30.599 12:19:37 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:30.599 12:19:37 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:30.599 12:19:37 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:30.599 12:19:37 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:30.599 12:19:37 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:30.599 12:19:37 nvme -- common/autotest_common.sh@1075 -- # stubpid=64564 00:07:30.599 Waiting for stub to ready for secondary processes... 00:07:30.599 12:19:37 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:30.599 12:19:37 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:30.599 12:19:37 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64564 ]] 00:07:30.599 12:19:37 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:30.599 12:19:37 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:30.599 [2024-12-16 12:19:37.536205] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:30.600 [2024-12-16 12:19:37.536313] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:31.165 [2024-12-16 12:19:38.259861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.423 [2024-12-16 12:19:38.352205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.423 [2024-12-16 12:19:38.352350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.423 [2024-12-16 12:19:38.352422] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.423 [2024-12-16 12:19:38.365335] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:31.423 [2024-12-16 12:19:38.365366] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:31.423 [2024-12-16 12:19:38.378559] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:31.423 [2024-12-16 12:19:38.378788] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:31.423 [2024-12-16 12:19:38.382969] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:31.423 [2024-12-16 12:19:38.383327] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:31.423 [2024-12-16 12:19:38.384151] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:31.423 [2024-12-16 12:19:38.388666] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:31.423 [2024-12-16 12:19:38.388805] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:31.423 [2024-12-16 12:19:38.388851] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:31.423 [2024-12-16 12:19:38.390469] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:31.423 [2024-12-16 12:19:38.390586] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:31.423 [2024-12-16 12:19:38.390626] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:31.423 [2024-12-16 12:19:38.390655] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:31.423 [2024-12-16 12:19:38.390681] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:31.423 12:19:38 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:31.423 done. 00:07:31.423 12:19:38 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:31.423 12:19:38 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:31.423 12:19:38 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:31.423 12:19:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.423 12:19:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.423 ************************************ 00:07:31.423 START TEST nvme_reset 00:07:31.423 ************************************ 00:07:31.423 12:19:38 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:31.681 Initializing NVMe Controllers 00:07:31.681 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:31.681 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:31.681 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:31.681 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:31.681 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:31.681 00:07:31.681 real 0m0.239s 00:07:31.681 user 0m0.077s 00:07:31.681 sys 0m0.117s 00:07:31.681 12:19:38 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.681 12:19:38 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:31.681 ************************************ 00:07:31.681 END TEST nvme_reset 00:07:31.681 ************************************ 00:07:31.681 12:19:38 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:31.681 12:19:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.681 12:19:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.681 12:19:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.943 ************************************ 00:07:31.943 START TEST nvme_identify 00:07:31.943 ************************************ 00:07:31.943 12:19:38 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:31.943 12:19:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:31.943 12:19:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:31.943 12:19:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:31.943 12:19:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:31.943 12:19:38 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:31.943 12:19:38 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:31.943 12:19:38 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:31.943 12:19:38 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:31.943 12:19:38 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:31.943 12:19:38 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:31.943 12:19:38 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:31.943 12:19:38 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:31.943 
===================================================== 00:07:31.943 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:31.943 ===================================================== 00:07:31.943 Controller Capabilities/Features 00:07:31.943 ================================ 00:07:31.943 Vendor ID: 1b36 00:07:31.943 Subsystem Vendor ID: 1af4 00:07:31.943 Serial Number: 12340 00:07:31.943 Model Number: QEMU NVMe Ctrl 00:07:31.943 Firmware Version: 8.0.0 00:07:31.943 Recommended Arb Burst: 6 00:07:31.943 IEEE OUI Identifier: 00 54 52 00:07:31.943 Multi-path I/O 00:07:31.943 May have multiple subsystem ports: No 00:07:31.943 May have multiple controllers: No 00:07:31.943 Associated with SR-IOV VF: No 00:07:31.943 Max Data Transfer Size: 524288 00:07:31.943 Max Number of Namespaces: 256 00:07:31.943 Max Number of I/O Queues: 64 00:07:31.943 NVMe Specification Version (VS): 1.4 00:07:31.943 NVMe Specification Version (Identify): 1.4 00:07:31.943 Maximum Queue Entries: 2048 00:07:31.943 Contiguous Queues Required: Yes 00:07:31.943 Arbitration Mechanisms Supported 00:07:31.943 Weighted Round Robin: Not Supported 00:07:31.943 Vendor Specific: Not Supported 00:07:31.943 Reset Timeout: 7500 ms 00:07:31.943 Doorbell Stride: 4 bytes 00:07:31.943 NVM Subsystem Reset: Not Supported 00:07:31.943 Command Sets Supported 00:07:31.943 NVM Command Set: Supported 00:07:31.943 Boot Partition: Not Supported 00:07:31.943 Memory Page Size Minimum: 4096 bytes 00:07:31.943 Memory Page Size Maximum: 65536 bytes 00:07:31.943 Persistent Memory Region: Not Supported 00:07:31.943 Optional Asynchronous Events Supported 00:07:31.943 Namespace Attribute Notices: Supported 00:07:31.943 Firmware Activation Notices: Not Supported 00:07:31.943 ANA Change Notices: Not Supported 00:07:31.943 PLE Aggregate Log Change Notices: Not Supported 00:07:31.943 LBA Status Info Alert Notices: Not Supported 00:07:31.943 EGE Aggregate Log Change Notices: Not Supported 00:07:31.943 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.943 Zone Descriptor Change Notices: Not Supported 00:07:31.943 Discovery Log Change Notices: Not Supported 00:07:31.943 Controller Attributes 00:07:31.943 128-bit Host Identifier: Not Supported 00:07:31.943 Non-Operational Permissive Mode: Not Supported 00:07:31.943 NVM Sets: Not Supported 00:07:31.943 Read Recovery Levels: Not Supported 00:07:31.943 Endurance Groups: Not Supported 00:07:31.943 Predictable Latency Mode: Not Supported 00:07:31.943 Traffic Based Keep ALive: Not Supported 00:07:31.943 Namespace Granularity: Not Supported 00:07:31.943 SQ Associations: Not Supported 00:07:31.943 UUID List: Not Supported 00:07:31.943 Multi-Domain Subsystem: Not Supported 00:07:31.943 Fixed Capacity Management: Not Supported 00:07:31.943 Variable Capacity Management: Not Supported 00:07:31.943 Delete Endurance Group: Not Supported 00:07:31.943 Delete NVM Set: Not Supported 00:07:31.943 Extended LBA Formats Supported: Supported 00:07:31.943 Flexible Data Placement Supported: Not Supported 00:07:31.943 00:07:31.943 Controller Memory Buffer Support 00:07:31.943 ================================ 00:07:31.943 Supported: No 00:07:31.943 00:07:31.943 Persistent Memory Region Support 00:07:31.943 ================================ 00:07:31.943 Supported: No 00:07:31.943 00:07:31.943 Admin Command Set Attributes 00:07:31.943 ============================ 00:07:31.943 Security Send/Receive: Not Supported 00:07:31.943 Format NVM: Supported 00:07:31.943 Firmware Activate/Download: Not Supported 00:07:31.943 Namespace Management: 
Supported 00:07:31.943 Device Self-Test: Not Supported 00:07:31.943 Directives: Supported 00:07:31.943 NVMe-MI: Not Supported 00:07:31.943 Virtualization Management: Not Supported 00:07:31.943 Doorbell Buffer Config: Supported 00:07:31.943 Get LBA Status Capability: Not Supported 00:07:31.943 Command & Feature Lockdown Capability: Not Supported 00:07:31.943 Abort Command Limit: 4 00:07:31.943 Async Event Request Limit: 4 00:07:31.943 Number of Firmware Slots: N/A 00:07:31.943 Firmware Slot 1 Read-Only: N/A 00:07:31.943 Firmware Activation Without Reset: N/A 00:07:31.943 Multiple Update Detection Support: N/A 00:07:31.943 Firmware Update Granularity: No Information Provided 00:07:31.943 Per-Namespace SMART Log: Yes 00:07:31.943 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.943 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:31.943 Command Effects Log Page: Supported 00:07:31.943 Get Log Page Extended Data: Supported 00:07:31.943 Telemetry Log Pages: Not Supported 00:07:31.944 Persistent Event Log Pages: Not Supported 00:07:31.944 Supported Log Pages Log Page: May Support 00:07:31.944 Commands Supported & Effects Log Page: Not Supported 00:07:31.944 Feature Identifiers & Effects Log Page:May Support 00:07:31.944 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.944 Data Area 4 for Telemetry Log: Not Supported 00:07:31.944 Error Log Page Entries Supported: 1 00:07:31.944 Keep Alive: Not Supported 00:07:31.944 00:07:31.944 NVM Command Set Attributes 00:07:31.944 ========================== 00:07:31.944 Submission Queue Entry Size 00:07:31.944 Max: 64 00:07:31.944 Min: 64 00:07:31.944 Completion Queue Entry Size 00:07:31.944 Max: 16 00:07:31.944 Min: 16 00:07:31.944 Number of Namespaces: 256 00:07:31.944 Compare Command: Supported 00:07:31.944 Write Uncorrectable Command: Not Supported 00:07:31.944 Dataset Management Command: Supported 00:07:31.944 Write Zeroes Command: Supported 00:07:31.944 Set Features Save Field: Supported 00:07:31.944 Reservations: Not Supported 00:07:31.944 Timestamp: Supported 00:07:31.944 Copy: Supported 00:07:31.944 Volatile Write Cache: Present 00:07:31.944 Atomic Write Unit (Normal): 1 00:07:31.944 Atomic Write Unit (PFail): 1 00:07:31.944 Atomic Compare & Write Unit: 1 00:07:31.944 Fused Compare & Write: Not Supported 00:07:31.944 Scatter-Gather List 00:07:31.944 SGL Command Set: Supported 00:07:31.944 SGL Keyed: Not Supported 00:07:31.944 SGL Bit Bucket Descriptor: Not Supported 00:07:31.944 SGL Metadata Pointer: Not Supported 00:07:31.944 Oversized SGL: Not Supported 00:07:31.944 SGL Metadata Address: Not Supported 00:07:31.944 SGL Offset: Not Supported 00:07:31.944 Transport SGL Data Block: Not Supported 00:07:31.944 Replay Protected Memory Block: Not Supported 00:07:31.944 00:07:31.944 Firmware Slot Information 00:07:31.944 ========================= 00:07:31.944 Active slot: 1 00:07:31.944 Slot 1 Firmware Revision: 1.0 00:07:31.944 00:07:31.944 00:07:31.944 Commands Supported and Effects 00:07:31.944 ============================== 00:07:31.944 Admin Commands 00:07:31.944 -------------- 00:07:31.944 Delete I/O Submission Queue (00h): Supported 00:07:31.944 Create I/O Submission Queue (01h): Supported 00:07:31.944 Get Log Page (02h): Supported 00:07:31.944 Delete I/O Completion Queue (04h): Supported 00:07:31.944 Create I/O Completion Queue (05h): Supported 00:07:31.944 Identify (06h): Supported 00:07:31.944 Abort (08h): Supported 00:07:31.944 Set Features (09h): Supported 00:07:31.944 Get Features (0Ah): Supported 00:07:31.944 Asynchronous 
Event Request (0Ch): Supported 00:07:31.944 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.944 Directive Send (19h): Supported 00:07:31.944 Directive Receive (1Ah): Supported 00:07:31.944 Virtualization Management (1Ch): Supported 00:07:31.944 Doorbell Buffer Config (7Ch): Supported 00:07:31.944 Format NVM (80h): Supported LBA-Change 00:07:31.944 I/O Commands 00:07:31.944 ------------ 00:07:31.944 Flush (00h): Supported LBA-Change 00:07:31.944 Write (01h): Supported LBA-Change 00:07:31.944 Read (02h): Supported 00:07:31.944 Compare (05h): Supported 00:07:31.944 Write Zeroes (08h): Supported LBA-Change 00:07:31.944 Dataset Management (09h): Supported LBA-Change 00:07:31.944 Unknown (0Ch): Supported 00:07:31.944 Unknown (12h): Supported 00:07:31.944 Copy (19h): Supported LBA-Change 00:07:31.944 Unknown (1Dh): Supported LBA-Change 00:07:31.944 00:07:31.944 Error Log 00:07:31.944 ========= 00:07:31.944 00:07:31.944 Arbitration 00:07:31.944 =========== 00:07:31.944 Arbitration Burst: no limit 00:07:31.944 00:07:31.944 Power Management 00:07:31.944 ================ 00:07:31.944 Number of Power States: 1 00:07:31.944 Current Power State: Power State #0 00:07:31.944 Power State #0: 00:07:31.944 Max Power: 25.00 W 00:07:31.944 Non-Operational State: Operational 00:07:31.944 Entry Latency: 16 microseconds 00:07:31.944 Exit Latency: 4 microseconds 00:07:31.944 Relative Read Throughput: 0 00:07:31.944 Relative Read Latency: 0 00:07:31.944 Relative Write Throughput: 0 00:07:31.944 Relative Write Latency: 0 00:07:31.944 Idle Power: Not Reported 00:07:31.944 Active Power: Not Reported 00:07:31.944 Non-Operational Permissive Mode: Not Supported 00:07:31.944 00:07:31.944 Health Information 00:07:31.944 ================== 00:07:31.944 Critical Warnings: 00:07:31.944 Available Spare Space: OK 00:07:31.944 Temperature: OK 00:07:31.944 Device Reliability: OK 00:07:31.944 Read Only: No 00:07:31.944 Volatile Memory Backup: OK 00:07:31.944 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.944 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.944 Available Spare: 0% 00:07:31.944 Available Spare Threshold: 0% 00:07:31.944 Life Percentage Used: 0% 00:07:31.944 Data Units Read: 698 00:07:31.944 Data Units Written: 626 00:07:31.944 Host Read Commands: 36852 00:07:31.944 Host Write Commands: 36638 00:07:31.944 Controller Busy Time: 0 minutes 00:07:31.944 Power Cycles: 0 00:07:31.944 Power On Hours: 0 hours 00:07:31.944 Unsafe Shutdowns: 0 00:07:31.944 Unrecoverable Media Errors: 0 00:07:31.944 Lifetime Error Log Entries: 0 00:07:31.944 Warning Temperature Time: 0 minutes 00:07:31.944 Critical Temperature Time: 0 minutes 00:07:31.944 00:07:31.944 Number of Queues 00:07:31.944 ================ 00:07:31.944 Number of I/O Submission Queues: 64 00:07:31.944 Number of I/O Completion Queues: 64 00:07:31.944 00:07:31.944 ZNS Specific Controller Data 00:07:31.944 ============================ 00:07:31.944 Zone Append Size Limit: 0 00:07:31.944 00:07:31.944 00:07:31.944 Active Namespaces 00:07:31.944 ================= 00:07:31.944 Namespace ID:1 00:07:31.944 Error Recovery Timeout: Unlimited 00:07:31.944 Command Set Identifier: NVM (00h) 00:07:31.944 Deallocate: Supported 00:07:31.944 Deallocated/Unwritten Error: Supported 00:07:31.944 Deallocated Read Value: All 0x00 00:07:31.944 Deallocate in Write Zeroes: Not Supported 00:07:31.944 Deallocated Guard Field: 0xFFFF 00:07:31.944 Flush: Supported 00:07:31.944 Reservation: Not Supported 00:07:31.944 Metadata Transferred as: Separate Metadata Buffer 
00:07:31.944 Namespace Sharing Capabilities: Private 00:07:31.944 Size (in LBAs): 1548666 (5GiB) 00:07:31.944 Capacity (in LBAs): 1548666 (5GiB) 00:07:31.944 Utilization (in LBAs): 1548666 (5GiB) 00:07:31.944 Thin Provisioning: Not Supported 00:07:31.944 Per-NS Atomic Units: No 00:07:31.944 Maximum Single Source Range Length: 128 00:07:31.944 Maximum Copy Length: 128 00:07:31.944 Maximum Source Range Count: 128 00:07:31.944 NGUID/EUI64 Never Reused: No 00:07:31.944 Namespace Write Protected: No 00:07:31.944 Number of LBA Formats: 8 00:07:31.944 Current LBA Format: LBA Format #07 00:07:31.944 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.944 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.944 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.944 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.944 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.944 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.944 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.944 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.944 00:07:31.944 NVM Specific Namespace Data 00:07:31.944 =========================== 00:07:31.944 Logical Block Storage Tag Mask: 0 00:07:31.944 Protection Information Capabilities: 00:07:31.944 16b Guard Protection Information Storage Tag Support: No 00:07:31.944 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.944 Storage Tag Check Read Support: No 00:07:31.945 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.945 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.945 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.945 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.945 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.945 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.945 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.945 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.945 ===================================================== 00:07:31.945 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:31.945 ===================================================== 00:07:31.945 Controller Capabilities/Features 00:07:31.945 ================================ 00:07:31.945 Vendor ID: 1b36 00:07:31.945 Subsystem Vendor ID: 1af4 00:07:31.945 Serial Number: 12341 00:07:31.945 Model Number: QEMU NVMe Ctrl 00:07:31.945 Firmware Version: 8.0.0 00:07:31.945 Recommended Arb Burst: 6 00:07:31.945 IEEE OUI Identifier: 00 54 52 00:07:31.945 Multi-path I/O 00:07:31.945 May have multiple subsystem ports: No 00:07:31.945 May have multiple controllers: No 00:07:31.945 Associated with SR-IOV VF: No 00:07:31.945 Max Data Transfer Size: 524288 00:07:31.945 Max Number of Namespaces: 256 00:07:31.945 Max Number of I/O Queues: 64 00:07:31.945 NVMe Specification Version (VS): 1.4 00:07:31.945 NVMe Specification Version (Identify): 1.4 00:07:31.945 Maximum Queue Entries: 2048 00:07:31.945 Contiguous Queues Required: Yes 00:07:31.945 Arbitration Mechanisms Supported 00:07:31.945 Weighted Round Robin: Not Supported 00:07:31.945 Vendor Specific: Not Supported 00:07:31.945 Reset Timeout: 7500 ms 00:07:31.945 Doorbell Stride: 
4 bytes 00:07:31.945 NVM Subsystem Reset: Not Supported 00:07:31.945 Command Sets Supported 00:07:31.945 NVM Command Set: Supported 00:07:31.945 Boot Partition: Not Supported 00:07:31.945 Memory Page Size Minimum: 4096 bytes 00:07:31.945 Memory Page Size Maximum: 65536 bytes 00:07:31.945 Persistent Memory Region: Not Supported 00:07:31.945 Optional Asynchronous Events Supported 00:07:31.945 Namespace Attribute Notices: Supported 00:07:31.945 Firmware Activation Notices: Not Supported 00:07:31.945 ANA Change Notices: Not Supported 00:07:31.945 PLE Aggregate Log Change Notices: Not Supported 00:07:31.945 LBA Status Info Alert Notices: Not Supported 00:07:31.945 EGE Aggregate Log Change Notices: Not Supported 00:07:31.945 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.945 Zone Descriptor Change Notices: Not Supported 00:07:31.945 Discovery Log Change Notices: Not Supported 00:07:31.945 Controller Attributes 00:07:31.945 128-bit Host Identifier: Not Supported 00:07:31.945 Non-Operational Permissive Mode: Not Supported 00:07:31.945 NVM Sets: Not Supported 00:07:31.945 Read Recovery Levels: Not Supported 00:07:31.945 Endurance Groups: Not Supported 00:07:31.945 Predictable Latency Mode: Not Supported 00:07:31.945 Traffic Based Keep ALive: Not Supported 00:07:31.945 Namespace Granularity: Not Supported 00:07:31.945 SQ Associations: Not Supported 00:07:31.945 UUID List: Not Supported 00:07:31.945 Multi-Domain Subsystem: Not Supported 00:07:31.945 Fixed Capacity Management: Not Supported 00:07:31.945 Variable Capacity Management: Not Supported 00:07:31.945 Delete Endurance Group: Not Supported 00:07:31.945 Delete NVM Set: Not Supported 00:07:31.945 Extended LBA Formats Supported: Supported 00:07:31.945 Flexible Data Placement Supported: Not Supported 00:07:31.945 00:07:31.945 Controller Memory Buffer Support 00:07:31.945 ================================ 00:07:31.945 Supported: No 00:07:31.945 00:07:31.945 Persistent Memory Region Support 00:07:31.945 ================================ 00:07:31.945 Supported: No 00:07:31.945 00:07:31.945 Admin Command Set Attributes 00:07:31.945 ============================ 00:07:31.945 Security Send/Receive: Not Supported 00:07:31.945 Format NVM: Supported 00:07:31.945 Firmware Activate/Download: Not Supported 00:07:31.945 Namespace Management: Supported 00:07:31.945 Device Self-Test: Not Supported 00:07:31.945 Directives: Supported 00:07:31.945 NVMe-MI: Not Supported 00:07:31.945 Virtualization Management: Not Supported 00:07:31.945 Doorbell Buffer Config: Supported 00:07:31.945 Get LBA Status Capability: Not Supported 00:07:31.945 Command & Feature Lockdown Capability: Not Supported 00:07:31.945 Abort Command Limit: 4 00:07:31.945 Async Event Request Limit: 4 00:07:31.945 Number of Firmware Slots: N/A 00:07:31.945 Firmware Slot 1 Read-Only: N/A 00:07:31.945 Firmware Activation Without Reset: N/A 00:07:31.945 Multiple Update Detection Support: N/A 00:07:31.945 Firmware Update Granularity: No Information Provided 00:07:31.945 Per-Namespace SMART Log: Yes 00:07:31.945 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.945 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:31.945 Command Effects Log Page: Supported 00:07:31.945 Get Log Page Extended Data: Supported 00:07:31.945 Telemetry Log Pages: Not Supported 00:07:31.945 Persistent Event Log Pages: Not Supported 00:07:31.945 Supported Log Pages Log Page: May Support 00:07:31.945 Commands Supported & Effects Log Page: Not Supported 00:07:31.945 Feature Identifiers & Effects Log Page:May Support 
00:07:31.945 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.945 Data Area 4 for Telemetry Log: Not Supported 00:07:31.945 Error Log Page Entries Supported: 1 00:07:31.945 Keep Alive: Not Supported 00:07:31.945 00:07:31.945 NVM Command Set Attributes 00:07:31.945 ========================== 00:07:31.945 Submission Queue Entry Size 00:07:31.945 Max: 64 00:07:31.945 Min: 64 00:07:31.945 Completion Queue Entry Size 00:07:31.945 Max: 16 00:07:31.945 Min: 16 00:07:31.945 Number of Namespaces: 256 00:07:31.945 Compare Command: Supported 00:07:31.945 Write Uncorrectable Command: Not Supported 00:07:31.945 Dataset Management Command: Supported 00:07:31.945 Write Zeroes Command: Supported 00:07:31.945 Set Features Save Field: Supported 00:07:31.945 Reservations: Not Supported 00:07:31.945 Timestamp: Supported 00:07:31.945 Copy: Supported 00:07:31.945 Volatile Write Cache: Present 00:07:31.945 Atomic Write Unit (Normal): 1 00:07:31.945 Atomic Write Unit (PFail): 1 00:07:31.945 Atomic Compare & Write Unit: 1 00:07:31.945 Fused Compare & Write: Not Supported 00:07:31.945 Scatter-Gather List 00:07:31.945 SGL Command Set: Supported 00:07:31.945 SGL Keyed: Not Supported 00:07:31.945 SGL Bit Bucket Descriptor: Not Supported 00:07:31.945 SGL Metadata Pointer: Not Supported 00:07:31.945 Oversized SGL: Not Supported 00:07:31.945 SGL Metadata Address: Not Supported 00:07:31.945 SGL Offset: Not Supported 00:07:31.945 Transport SGL Data Block: Not Supported 00:07:31.945 Replay Protected Memory Block: Not Supported 00:07:31.945 00:07:31.945 Firmware Slot Information 00:07:31.945 ========================= 00:07:31.945 Active slot: 1 00:07:31.945 Slot 1 Firmware Revision: 1.0 00:07:31.945 00:07:31.945 00:07:31.945 Commands Supported and Effects 00:07:31.945 ============================== 00:07:31.945 Admin Commands 00:07:31.945 -------------- 00:07:31.945 Delete I/O Submission Queue (00h): Supported 00:07:31.945 Create I/O Submission Queue (01h): Supported 00:07:31.945 Get Log Page (02h): Supported 00:07:31.945 Delete I/O Completion Queue (04h): Supported 00:07:31.945 Create I/O Completion Queue (05h): Supported 00:07:31.945 Identify (06h): Supported 00:07:31.945 Abort (08h): Supported 00:07:31.945 Set Features (09h): Supported 00:07:31.945 Get Features (0Ah): Supported 00:07:31.945 Asynchronous Event Request (0Ch): Supported 00:07:31.945 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.945 Directive Send (19h): Supported 00:07:31.945 Directive Receive (1Ah): Supported 00:07:31.945 Virtualization Management (1Ch): Supported 00:07:31.945 Doorbell Buffer Config (7Ch): Supported 00:07:31.945 Format NVM (80h): Supported LBA-Change 00:07:31.945 I/O Commands 00:07:31.945 ------------ 00:07:31.945 Flush (00h): Supported LBA-Change 00:07:31.945 Write (01h): Supported LBA-Change 00:07:31.945 Read (02h): Supported 00:07:31.945 Compare (05h): Supported 00:07:31.945 Write Zeroes (08h): Supported LBA-Change 00:07:31.945 Dataset Management (09h): Supported LBA-Change 00:07:31.945 Unknown (0Ch): Supported 00:07:31.945 Unknown (12h): Supported 00:07:31.945 Copy (19h): Supported LBA-Change 00:07:31.945 Unknown (1Dh): Supported LBA-Change 00:07:31.945 00:07:31.945 Error Log 00:07:31.945 ========= 00:07:31.946 00:07:31.946 Arbitration 00:07:31.946 =========== 00:07:31.946 Arbitration Burst: no limit 00:07:31.946 00:07:31.946 Power Management 00:07:31.946 ================ 00:07:31.946 Number of Power States: 1 00:07:31.946 Current Power State: Power State #0 00:07:31.946 Power State #0: 00:07:31.946 Max 
Power: 25.00 W 00:07:31.946 Non-Operational State: Operational 00:07:31.946 Entry Latency: 16 microseconds 00:07:31.946 Exit Latency: 4 microseconds 00:07:31.946 Relative Read Throughput: 0 00:07:31.946 Relative Read Latency: 0 00:07:31.946 Relative Write Throughput: 0 00:07:31.946 Relative Write Latency: 0 00:07:31.946 Idle Power: Not Reported 00:07:31.946 Active Power: Not Reported 00:07:31.946 Non-Operational Permissive Mode: Not Supported 00:07:31.946 00:07:31.946 Health Information 00:07:31.946 ================== 00:07:31.946 Critical Warnings: 00:07:31.946 Available Spare Space: OK 00:07:31.946 Temperature: OK 00:07:31.946 Device Reliability: OK 00:07:31.946 Read Only: No 00:07:31.946 Volatile Memory Backup: OK 00:07:31.946 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.946 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.946 Available Spare: 0% 00:07:31.946 Available Spare Threshold: 0% 00:07:31.946 Life Percentage Used: 0% 00:07:31.946 Data Units Read: 1094 00:07:31.946 Data Units Written: 961 00:07:31.946 Host Read Commands: 55772 00:07:31.946 Host Write Commands: 54555 00:07:31.946 Controller Busy Time: 0 minutes 00:07:31.946 Power Cycles: 0 00:07:31.946 Power On Hours: 0 hours 00:07:31.946 Unsafe Shutdowns: 0 00:07:31.946 Unrecoverable Media Errors: 0 00:07:31.946 Lifetime Error Log Entries: 0 00:07:31.946 Warning Temperature Time: 0 minutes 00:07:31.946 Critical Temperature Time: 0 minutes 00:07:31.946 00:07:31.946 Number of Queues 00:07:31.946 ================ 00:07:31.946 Number of I/O Submission Queues: 64 00:07:31.946 Number of I/O Completion Queues: 64 00:07:31.946 00:07:31.946 ZNS Specific Controller Data 00:07:31.946 ============================ 00:07:31.946 Zone Append Size Limit: 0 00:07:31.946 00:07:31.946 00:07:31.946 Active Namespaces 00:07:31.946 ================= 00:07:31.946 Namespace ID:1 00:07:31.946 Error Recovery Timeout: Unlimited 00:07:31.946 Command Set Identifier: NVM (00h) 00:07:31.946 Deallocate: Supported 00:07:31.946 Deallocated/Unwritten Error: Supported 00:07:31.946 Deallocated Read Value: All 0x00 00:07:31.946 Deallocate in Write Zeroes: Not Supported 00:07:31.946 Deallocated Guard Field: 0xFFFF 00:07:31.946 Flush: Supported 00:07:31.946 Reservation: Not Supported 00:07:31.946 Namespace Sharing Capabilities: Private 00:07:31.946 Size (in LBAs): 1310720 (5GiB) 00:07:31.946 Capacity (in LBAs): 1310720 (5GiB) 00:07:31.946 Utilization (in LBAs): 1310720 (5GiB) 00:07:31.946 Thin Provisioning: Not Supported 00:07:31.946 Per-NS Atomic Units: No 00:07:31.946 Maximum Single Source Range Length: 128 00:07:31.946 Maximum Copy Length: 128 00:07:31.946 Maximum Source Range Count: 128 00:07:31.946 NGUID/EUI64 Never Reused: No 00:07:31.946 Namespace Write Protected: No 00:07:31.946 Number of LBA Formats: 8 00:07:31.946 Current LBA Format: LBA Format #04 00:07:31.946 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.946 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.946 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.946 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.946 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.946 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.946 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.946 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.946 00:07:31.946 NVM Specific Namespace Data 00:07:31.946 =========================== 00:07:31.946 Logical Block Storage Tag Mask: 0 00:07:31.946 Protection Information Capabilities: 00:07:31.946 16b 
Guard Protection Information Storage Tag Support: No 00:07:31.946 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.946 Storage Tag Check Read Support: No 00:07:31.946 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.946 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.946 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.946 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.946 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.946 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.946 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.946 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.946 ===================================================== 00:07:31.946 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:31.946 ===================================================== 00:07:31.946 Controller Capabilities/Features 00:07:31.946 ================================ 00:07:31.946 Vendor ID: 1b36 00:07:31.946 Subsystem Vendor ID: 1af4 00:07:31.946 Serial Number: 12343 00:07:31.946 Model Number: QEMU NVMe Ctrl 00:07:31.946 Firmware Version: 8.0.0 00:07:31.946 Recommended Arb Burst: 6 00:07:31.946 IEEE OUI Identifier: 00 54 52 00:07:31.946 Multi-path I/O 00:07:31.946 May have multiple subsystem ports: No 00:07:31.946 May have multiple controllers: Yes 00:07:31.946 Associated with SR-IOV VF: No 00:07:31.946 Max Data Transfer Size: 524288 00:07:31.946 Max Number of Namespaces: 256 00:07:31.946 Max Number of I/O Queues: 64 00:07:31.946 NVMe Specification Version (VS): 1.4 00:07:31.946 NVMe Specification Version (Identify): 1.4 00:07:31.946 Maximum Queue Entries: 2048 00:07:31.946 Contiguous Queues Required: Yes 00:07:31.946 Arbitration Mechanisms Supported 00:07:31.946 Weighted Round Robin: Not Supported 00:07:31.946 Vendor Specific: Not Supported 00:07:31.946 Reset Timeout: 7500 ms 00:07:31.946 Doorbell Stride: 4 bytes 00:07:31.946 NVM Subsystem Reset: Not Supported 00:07:31.946 Command Sets Supported 00:07:31.946 NVM Command Set: Supported 00:07:31.946 Boot Partition: Not Supported 00:07:31.946 Memory Page Size Minimum: 4096 bytes 00:07:31.946 Memory Page Size Maximum: 65536 bytes 00:07:31.946 Persistent Memory Region: Not Supported 00:07:31.946 Optional Asynchronous Events Supported 00:07:31.946 Namespace Attribute Notices: Supported 00:07:31.946 Firmware Activation Notices: Not Supported 00:07:31.946 ANA Change Notices: Not Supported 00:07:31.946 PLE Aggregate Log Change Notices: Not Supported 00:07:31.946 LBA Status Info Alert Notices: Not Supported 00:07:31.946 EGE Aggregate Log Change Notices: Not Supported 00:07:31.946 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.946 Zone Descriptor Change Notices: Not Supported 00:07:31.946 Discovery Log Change Notices: Not Supported 00:07:31.946 Controller Attributes 00:07:31.946 128-bit Host Identifier: Not Supported 00:07:31.946 Non-Operational Permissive Mode: Not Supported 00:07:31.946 NVM Sets: Not Supported 00:07:31.946 Read Recovery Levels: Not Supported 00:07:31.946 Endurance Groups: Supported 00:07:31.946 Predictable Latency Mode: Not Supported 00:07:31.946 Traffic Based Keep ALive: Not Supported 00:07:31.946 
Namespace Granularity: Not Supported 00:07:31.946 SQ Associations: Not Supported 00:07:31.946 UUID List: Not Supported 00:07:31.946 Multi-Domain Subsystem: Not Supported 00:07:31.946 Fixed Capacity Management: Not Supported 00:07:31.946 Variable Capacity Management: Not Supported 00:07:31.946 Delete Endurance Group: Not Supported 00:07:31.946 Delete NVM Set: Not Supported 00:07:31.946 Extended LBA Formats Supported: Supported 00:07:31.946 Flexible Data Placement Supported: Supported 00:07:31.946 00:07:31.946 Controller Memory Buffer Support 00:07:31.946 ================================ 00:07:31.946 Supported: No 00:07:31.946 00:07:31.946 Persistent Memory Region Support 00:07:31.946 ================================ 00:07:31.946 Supported: No 00:07:31.946 00:07:31.946 Admin Command Set Attributes 00:07:31.947 ============================ 00:07:31.947 Security Send/Receive: Not Supported 00:07:31.947 Format NVM: Supported 00:07:31.947 Firmware Activate/Download: Not Supported 00:07:31.947 Namespace Management: Supported 00:07:31.947 Device Self-Test: Not Supported 00:07:31.947 Directives: Supported 00:07:31.947 NVMe-MI: Not Supported 00:07:31.947 Virtualization Management: Not Supported 00:07:31.947 Doorbell Buffer Config: Supported 00:07:31.947 Get LBA Status Capability: Not Supported 00:07:31.947 Command & Feature Lockdown Capability: Not Supported 00:07:31.947 Abort Command Limit: 4 00:07:31.947 Async Event Request Limit: 4 00:07:31.947 Number of Firmware Slots: N/A 00:07:31.947 Firmware Slot 1 Read-Only: N/A 00:07:31.947 Firmware Activation Without Reset: N/A 00:07:31.947 Multiple Update Detection Support: N/A 00:07:31.947 Firmware Update Granularity: No Information Provided 00:07:31.947 Per-Namespace SMART Log: Yes 00:07:31.947 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.947 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:31.947 Command Effects Log Page: Supported 00:07:31.947 Get Log Page Extended Data: Supported 00:07:31.947 Telemetry Log Pages: Not Supported 00:07:31.947 Persistent Event Log Pages: Not Supported 00:07:31.947 Supported Log Pages Log Page: May Support 00:07:31.947 Commands Supported & Effects Log Page: Not Supported 00:07:31.947 [2024-12-16 12:19:39.011780] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64586 terminated unexpected 00:07:31.947 [2024-12-16 12:19:39.012597] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64586 terminated unexpected 00:07:31.947 [2024-12-16 12:19:39.013143] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64586 terminated unexpected 00:07:31.947 Feature Identifiers & Effects Log Page:May Support 00:07:31.947 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.947 Data Area 4 for Telemetry Log: Not Supported 00:07:31.947 Error Log Page Entries Supported: 1 00:07:31.947 Keep Alive: Not Supported 00:07:31.947 00:07:31.947 NVM Command Set Attributes 00:07:31.947 ========================== 00:07:31.947 Submission Queue Entry Size 00:07:31.947 Max: 64 00:07:31.947 Min: 64 00:07:31.947 Completion Queue Entry Size 00:07:31.947 Max: 16 00:07:31.947 Min: 16 00:07:31.947 Number of Namespaces: 256 00:07:31.947 Compare Command: Supported 00:07:31.947 Write Uncorrectable Command: Not Supported 00:07:31.947 Dataset Management Command: Supported 00:07:31.947 Write Zeroes Command: Supported 00:07:31.947 Set Features Save Field: Supported 00:07:31.947 Reservations: Not Supported 00:07:31.947 Timestamp: Supported
00:07:31.947 Copy: Supported 00:07:31.947 Volatile Write Cache: Present 00:07:31.947 Atomic Write Unit (Normal): 1 00:07:31.947 Atomic Write Unit (PFail): 1 00:07:31.947 Atomic Compare & Write Unit: 1 00:07:31.947 Fused Compare & Write: Not Supported 00:07:31.947 Scatter-Gather List 00:07:31.947 SGL Command Set: Supported 00:07:31.947 SGL Keyed: Not Supported 00:07:31.947 SGL Bit Bucket Descriptor: Not Supported 00:07:31.947 SGL Metadata Pointer: Not Supported 00:07:31.947 Oversized SGL: Not Supported 00:07:31.947 SGL Metadata Address: Not Supported 00:07:31.947 SGL Offset: Not Supported 00:07:31.947 Transport SGL Data Block: Not Supported 00:07:31.947 Replay Protected Memory Block: Not Supported 00:07:31.947 00:07:31.947 Firmware Slot Information 00:07:31.947 ========================= 00:07:31.947 Active slot: 1 00:07:31.947 Slot 1 Firmware Revision: 1.0 00:07:31.947 00:07:31.947 00:07:31.947 Commands Supported and Effects 00:07:31.947 ============================== 00:07:31.947 Admin Commands 00:07:31.947 -------------- 00:07:31.947 Delete I/O Submission Queue (00h): Supported 00:07:31.947 Create I/O Submission Queue (01h): Supported 00:07:31.947 Get Log Page (02h): Supported 00:07:31.947 Delete I/O Completion Queue (04h): Supported 00:07:31.947 Create I/O Completion Queue (05h): Supported 00:07:31.947 Identify (06h): Supported 00:07:31.947 Abort (08h): Supported 00:07:31.947 Set Features (09h): Supported 00:07:31.947 Get Features (0Ah): Supported 00:07:31.947 Asynchronous Event Request (0Ch): Supported 00:07:31.947 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.947 Directive Send (19h): Supported 00:07:31.947 Directive Receive (1Ah): Supported 00:07:31.947 Virtualization Management (1Ch): Supported 00:07:31.947 Doorbell Buffer Config (7Ch): Supported 00:07:31.947 Format NVM (80h): Supported LBA-Change 00:07:31.947 I/O Commands 00:07:31.947 ------------ 00:07:31.947 Flush (00h): Supported LBA-Change 00:07:31.947 Write (01h): Supported LBA-Change 00:07:31.947 Read (02h): Supported 00:07:31.947 Compare (05h): Supported 00:07:31.947 Write Zeroes (08h): Supported LBA-Change 00:07:31.947 Dataset Management (09h): Supported LBA-Change 00:07:31.947 Unknown (0Ch): Supported 00:07:31.947 Unknown (12h): Supported 00:07:31.947 Copy (19h): Supported LBA-Change 00:07:31.947 Unknown (1Dh): Supported LBA-Change 00:07:31.947 00:07:31.947 Error Log 00:07:31.947 ========= 00:07:31.947 00:07:31.947 Arbitration 00:07:31.947 =========== 00:07:31.947 Arbitration Burst: no limit 00:07:31.947 00:07:31.947 Power Management 00:07:31.947 ================ 00:07:31.947 Number of Power States: 1 00:07:31.947 Current Power State: Power State #0 00:07:31.947 Power State #0: 00:07:31.947 Max Power: 25.00 W 00:07:31.947 Non-Operational State: Operational 00:07:31.947 Entry Latency: 16 microseconds 00:07:31.947 Exit Latency: 4 microseconds 00:07:31.947 Relative Read Throughput: 0 00:07:31.947 Relative Read Latency: 0 00:07:31.947 Relative Write Throughput: 0 00:07:31.947 Relative Write Latency: 0 00:07:31.947 Idle Power: Not Reported 00:07:31.947 Active Power: Not Reported 00:07:31.947 Non-Operational Permissive Mode: Not Supported 00:07:31.947 00:07:31.947 Health Information 00:07:31.947 ================== 00:07:31.947 Critical Warnings: 00:07:31.947 Available Spare Space: OK 00:07:31.947 Temperature: OK 00:07:31.947 Device Reliability: OK 00:07:31.947 Read Only: No 00:07:31.947 Volatile Memory Backup: OK 00:07:31.947 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.947 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:07:31.947 Available Spare: 0% 00:07:31.947 Available Spare Threshold: 0% 00:07:31.947 Life Percentage Used: 0% 00:07:31.947 Data Units Read: 830 00:07:31.947 Data Units Written: 759 00:07:31.947 Host Read Commands: 37999 00:07:31.947 Host Write Commands: 37422 00:07:31.947 Controller Busy Time: 0 minutes 00:07:31.947 Power Cycles: 0 00:07:31.947 Power On Hours: 0 hours 00:07:31.947 Unsafe Shutdowns: 0 00:07:31.947 Unrecoverable Media Errors: 0 00:07:31.947 Lifetime Error Log Entries: 0 00:07:31.947 Warning Temperature Time: 0 minutes 00:07:31.947 Critical Temperature Time: 0 minutes 00:07:31.947 00:07:31.947 Number of Queues 00:07:31.947 ================ 00:07:31.947 Number of I/O Submission Queues: 64 00:07:31.947 Number of I/O Completion Queues: 64 00:07:31.947 00:07:31.947 ZNS Specific Controller Data 00:07:31.947 ============================ 00:07:31.947 Zone Append Size Limit: 0 00:07:31.947 00:07:31.947 00:07:31.947 Active Namespaces 00:07:31.947 ================= 00:07:31.947 Namespace ID:1 00:07:31.947 Error Recovery Timeout: Unlimited 00:07:31.947 Command Set Identifier: NVM (00h) 00:07:31.947 Deallocate: Supported 00:07:31.947 Deallocated/Unwritten Error: Supported 00:07:31.947 Deallocated Read Value: All 0x00 00:07:31.947 Deallocate in Write Zeroes: Not Supported 00:07:31.947 Deallocated Guard Field: 0xFFFF 00:07:31.947 Flush: Supported 00:07:31.947 Reservation: Not Supported 00:07:31.947 Namespace Sharing Capabilities: Multiple Controllers 00:07:31.947 Size (in LBAs): 262144 (1GiB) 00:07:31.947 Capacity (in LBAs): 262144 (1GiB) 00:07:31.947 Utilization (in LBAs): 262144 (1GiB) 00:07:31.947 Thin Provisioning: Not Supported 00:07:31.947 Per-NS Atomic Units: No 00:07:31.947 Maximum Single Source Range Length: 128 00:07:31.947 Maximum Copy Length: 128 00:07:31.947 Maximum Source Range Count: 128 00:07:31.947 NGUID/EUI64 Never Reused: No 00:07:31.947 Namespace Write Protected: No 00:07:31.947 Endurance group ID: 1 00:07:31.947 Number of LBA Formats: 8 00:07:31.947 Current LBA Format: LBA Format #04 00:07:31.947 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.948 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.948 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.948 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.948 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.948 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.948 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.948 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.948 00:07:31.948 Get Feature FDP: 00:07:31.948 ================ 00:07:31.948 Enabled: Yes 00:07:31.948 FDP configuration index: 0 00:07:31.948 00:07:31.948 FDP configurations log page 00:07:31.948 =========================== 00:07:31.948 Number of FDP configurations: 1 00:07:31.948 Version: 0 00:07:31.948 Size: 112 00:07:31.948 FDP Configuration Descriptor: 0 00:07:31.948 Descriptor Size: 96 00:07:31.948 Reclaim Group Identifier format: 2 00:07:31.948 FDP Volatile Write Cache: Not Present 00:07:31.948 FDP Configuration: Valid 00:07:31.948 Vendor Specific Size: 0 00:07:31.948 Number of Reclaim Groups: 2 00:07:31.948 Number of Reclaim Unit Handles: 8 00:07:31.948 Max Placement Identifiers: 128 00:07:31.948 Number of Namespaces Supported: 256 00:07:31.948 Reclaim unit Nominal Size: 6000000 bytes 00:07:31.948 Estimated Reclaim Unit Time Limit: Not Reported 00:07:31.948 RUH Desc #000: RUH Type: Initially Isolated 00:07:31.948 RUH Desc #001: RUH Type: Initially Isolated
00:07:31.948 RUH Desc #002: RUH Type: Initially Isolated 00:07:31.948 RUH Desc #003: RUH Type: Initially Isolated 00:07:31.948 RUH Desc #004: RUH Type: Initially Isolated 00:07:31.948 RUH Desc #005: RUH Type: Initially Isolated 00:07:31.948 RUH Desc #006: RUH Type: Initially Isolated 00:07:31.948 RUH Desc #007: RUH Type: Initially Isolated 00:07:31.948 00:07:31.948 FDP reclaim unit handle usage log page 00:07:31.948 ====================================== 00:07:31.948 Number of Reclaim Unit Handles: 8 00:07:31.948 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:31.948 RUH Usage Desc #001: RUH Attributes: Unused 00:07:31.948 RUH Usage Desc #002: RUH Attributes: Unused 00:07:31.948 RUH Usage Desc #003: RUH Attributes: Unused 00:07:31.948 RUH Usage Desc #004: RUH Attributes: Unused 00:07:31.948 RUH Usage Desc #005: RUH Attributes: Unused 00:07:31.948 RUH Usage Desc #006: RUH Attributes: Unused 00:07:31.948 RUH Usage Desc #007: RUH Attributes: Unused 00:07:31.948 00:07:31.948 FDP statistics log page 00:07:31.948 ======================= 00:07:31.948 Host bytes with metadata written: 490708992 00:07:31.948 [2024-12-16 12:19:39.015226] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64586 terminated unexpected 00:07:31.948 Media bytes with metadata written: 490754048 00:07:31.948 Media bytes erased: 0 00:07:31.948 00:07:31.948 FDP events log page 00:07:31.948 =================== 00:07:31.948 Number of FDP events: 0 00:07:31.948 00:07:31.948 NVM Specific Namespace Data 00:07:31.948 =========================== 00:07:31.948 Logical Block Storage Tag Mask: 0 00:07:31.948 Protection Information Capabilities: 00:07:31.948 16b Guard Protection Information Storage Tag Support: No 00:07:31.948 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.948 Storage Tag Check Read Support: No 00:07:31.948 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.948 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.948 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.948 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.948 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.948 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.948 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.948 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.948 ===================================================== 00:07:31.948 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:31.948 ===================================================== 00:07:31.948 Controller Capabilities/Features 00:07:31.948 ================================ 00:07:31.948 Vendor ID: 1b36 00:07:31.948 Subsystem Vendor ID: 1af4 00:07:31.948 Serial Number: 12342 00:07:31.948 Model Number: QEMU NVMe Ctrl 00:07:31.948 Firmware Version: 8.0.0 00:07:31.948 Recommended Arb Burst: 6 00:07:31.948 IEEE OUI Identifier: 00 54 52 00:07:31.948 Multi-path I/O 00:07:31.948 May have multiple subsystem ports: No 00:07:31.948 May have multiple controllers: No 00:07:31.948 Associated with SR-IOV VF: No 00:07:31.948 Max Data Transfer Size: 524288 00:07:31.948 Max Number of Namespaces: 256 00:07:31.948 Max Number of I/O
Queues: 64 00:07:31.948 NVMe Specification Version (VS): 1.4 00:07:31.948 NVMe Specification Version (Identify): 1.4 00:07:31.948 Maximum Queue Entries: 2048 00:07:31.948 Contiguous Queues Required: Yes 00:07:31.948 Arbitration Mechanisms Supported 00:07:31.948 Weighted Round Robin: Not Supported 00:07:31.948 Vendor Specific: Not Supported 00:07:31.948 Reset Timeout: 7500 ms 00:07:31.948 Doorbell Stride: 4 bytes 00:07:31.948 NVM Subsystem Reset: Not Supported 00:07:31.948 Command Sets Supported 00:07:31.948 NVM Command Set: Supported 00:07:31.948 Boot Partition: Not Supported 00:07:31.948 Memory Page Size Minimum: 4096 bytes 00:07:31.948 Memory Page Size Maximum: 65536 bytes 00:07:31.948 Persistent Memory Region: Not Supported 00:07:31.948 Optional Asynchronous Events Supported 00:07:31.948 Namespace Attribute Notices: Supported 00:07:31.948 Firmware Activation Notices: Not Supported 00:07:31.948 ANA Change Notices: Not Supported 00:07:31.948 PLE Aggregate Log Change Notices: Not Supported 00:07:31.948 LBA Status Info Alert Notices: Not Supported 00:07:31.948 EGE Aggregate Log Change Notices: Not Supported 00:07:31.948 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.948 Zone Descriptor Change Notices: Not Supported 00:07:31.948 Discovery Log Change Notices: Not Supported 00:07:31.948 Controller Attributes 00:07:31.948 128-bit Host Identifier: Not Supported 00:07:31.948 Non-Operational Permissive Mode: Not Supported 00:07:31.948 NVM Sets: Not Supported 00:07:31.948 Read Recovery Levels: Not Supported 00:07:31.948 Endurance Groups: Not Supported 00:07:31.948 Predictable Latency Mode: Not Supported 00:07:31.948 Traffic Based Keep Alive: Not Supported 00:07:31.948 Namespace Granularity: Not Supported 00:07:31.948 SQ Associations: Not Supported 00:07:31.948 UUID List: Not Supported 00:07:31.948 Multi-Domain Subsystem: Not Supported 00:07:31.948 Fixed Capacity Management: Not Supported 00:07:31.948 Variable Capacity Management: Not Supported 00:07:31.948 Delete Endurance Group: Not Supported 00:07:31.948 Delete NVM Set: Not Supported 00:07:31.948 Extended LBA Formats Supported: Supported 00:07:31.948 Flexible Data Placement Supported: Not Supported 00:07:31.948 00:07:31.948 Controller Memory Buffer Support 00:07:31.948 ================================ 00:07:31.948 Supported: No 00:07:31.948 00:07:31.948 Persistent Memory Region Support 00:07:31.948 ================================ 00:07:31.948 Supported: No 00:07:31.948 00:07:31.948 Admin Command Set Attributes 00:07:31.948 ============================ 00:07:31.948 Security Send/Receive: Not Supported 00:07:31.948 Format NVM: Supported 00:07:31.948 Firmware Activate/Download: Not Supported 00:07:31.948 Namespace Management: Supported 00:07:31.948 Device Self-Test: Not Supported 00:07:31.948 Directives: Supported 00:07:31.948 NVMe-MI: Not Supported 00:07:31.948 Virtualization Management: Not Supported 00:07:31.948 Doorbell Buffer Config: Supported 00:07:31.948 Get LBA Status Capability: Not Supported 00:07:31.948 Command & Feature Lockdown Capability: Not Supported 00:07:31.948 Abort Command Limit: 4 00:07:31.948 Async Event Request Limit: 4 00:07:31.949 Number of Firmware Slots: N/A 00:07:31.949 Firmware Slot 1 Read-Only: N/A 00:07:31.949 Firmware Activation Without Reset: N/A 00:07:31.949 Multiple Update Detection Support: N/A 00:07:31.949 Firmware Update Granularity: No Information Provided 00:07:31.949 Per-Namespace SMART Log: Yes 00:07:31.949 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.949 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:07:31.949 Command Effects Log Page: Supported 00:07:31.949 Get Log Page Extended Data: Supported 00:07:31.949 Telemetry Log Pages: Not Supported 00:07:31.949 Persistent Event Log Pages: Not Supported 00:07:31.949 Supported Log Pages Log Page: May Support 00:07:31.949 Commands Supported & Effects Log Page: Not Supported 00:07:31.949 Feature Identifiers & Effects Log Page: May Support 00:07:31.949 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.949 Data Area 4 for Telemetry Log: Not Supported 00:07:31.949 Error Log Page Entries Supported: 1 00:07:31.949 Keep Alive: Not Supported 00:07:31.949 00:07:31.949 NVM Command Set Attributes 00:07:31.949 ========================== 00:07:31.949 Submission Queue Entry Size 00:07:31.949 Max: 64 00:07:31.949 Min: 64 00:07:31.949 Completion Queue Entry Size 00:07:31.949 Max: 16 00:07:31.949 Min: 16 00:07:31.949 Number of Namespaces: 256 00:07:31.949 Compare Command: Supported 00:07:31.949 Write Uncorrectable Command: Not Supported 00:07:31.949 Dataset Management Command: Supported 00:07:31.949 Write Zeroes Command: Supported 00:07:31.949 Set Features Save Field: Supported 00:07:31.949 Reservations: Not Supported 00:07:31.949 Timestamp: Supported 00:07:31.949 Copy: Supported 00:07:31.949 Volatile Write Cache: Present 00:07:31.949 Atomic Write Unit (Normal): 1 00:07:31.949 Atomic Write Unit (PFail): 1 00:07:31.949 Atomic Compare & Write Unit: 1 00:07:31.949 Fused Compare & Write: Not Supported 00:07:31.949 Scatter-Gather List 00:07:31.949 SGL Command Set: Supported 00:07:31.949 SGL Keyed: Not Supported 00:07:31.949 SGL Bit Bucket Descriptor: Not Supported 00:07:31.949 SGL Metadata Pointer: Not Supported 00:07:31.949 Oversized SGL: Not Supported 00:07:31.949 SGL Metadata Address: Not Supported 00:07:31.949 SGL Offset: Not Supported 00:07:31.949 Transport SGL Data Block: Not Supported 00:07:31.949 Replay Protected Memory Block: Not Supported 00:07:31.949 00:07:31.949 Firmware Slot Information 00:07:31.949 ========================= 00:07:31.949 Active slot: 1 00:07:31.949 Slot 1 Firmware Revision: 1.0 00:07:31.949 00:07:31.949 00:07:31.949 Commands Supported and Effects 00:07:31.949 ============================== 00:07:31.949 Admin Commands 00:07:31.949 -------------- 00:07:31.949 Delete I/O Submission Queue (00h): Supported 00:07:31.949 Create I/O Submission Queue (01h): Supported 00:07:31.949 Get Log Page (02h): Supported 00:07:31.949 Delete I/O Completion Queue (04h): Supported 00:07:31.949 Create I/O Completion Queue (05h): Supported 00:07:31.949 Identify (06h): Supported 00:07:31.949 Abort (08h): Supported 00:07:31.949 Set Features (09h): Supported 00:07:31.949 Get Features (0Ah): Supported 00:07:31.949 Asynchronous Event Request (0Ch): Supported 00:07:31.949 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.949 Directive Send (19h): Supported 00:07:31.949 Directive Receive (1Ah): Supported 00:07:31.949 Virtualization Management (1Ch): Supported 00:07:31.949 Doorbell Buffer Config (7Ch): Supported 00:07:31.949 Format NVM (80h): Supported LBA-Change 00:07:31.949 I/O Commands 00:07:31.949 ------------ 00:07:31.949 Flush (00h): Supported LBA-Change 00:07:31.949 Write (01h): Supported LBA-Change 00:07:31.949 Read (02h): Supported 00:07:31.949 Compare (05h): Supported 00:07:31.949 Write Zeroes (08h): Supported LBA-Change 00:07:31.949 Dataset Management (09h): Supported LBA-Change 00:07:31.949 Unknown (0Ch): Supported 00:07:31.949 Unknown (12h): Supported 00:07:31.949 Copy (19h): Supported LBA-Change 
00:07:31.949 Unknown (1Dh): Supported LBA-Change 00:07:31.949 00:07:31.949 Error Log 00:07:31.949 ========= 00:07:31.949 00:07:31.949 Arbitration 00:07:31.949 =========== 00:07:31.949 Arbitration Burst: no limit 00:07:31.949 00:07:31.949 Power Management 00:07:31.949 ================ 00:07:31.949 Number of Power States: 1 00:07:31.949 Current Power State: Power State #0 00:07:31.949 Power State #0: 00:07:31.949 Max Power: 25.00 W 00:07:31.949 Non-Operational State: Operational 00:07:31.949 Entry Latency: 16 microseconds 00:07:31.949 Exit Latency: 4 microseconds 00:07:31.949 Relative Read Throughput: 0 00:07:31.949 Relative Read Latency: 0 00:07:31.949 Relative Write Throughput: 0 00:07:31.949 Relative Write Latency: 0 00:07:31.949 Idle Power: Not Reported 00:07:31.949 Active Power: Not Reported 00:07:31.949 Non-Operational Permissive Mode: Not Supported 00:07:31.949 00:07:31.949 Health Information 00:07:31.949 ================== 00:07:31.949 Critical Warnings: 00:07:31.949 Available Spare Space: OK 00:07:31.949 Temperature: OK 00:07:31.949 Device Reliability: OK 00:07:31.949 Read Only: No 00:07:31.949 Volatile Memory Backup: OK 00:07:31.949 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.949 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.949 Available Spare: 0% 00:07:31.949 Available Spare Threshold: 0% 00:07:31.949 Life Percentage Used: 0% 00:07:31.949 Data Units Read: 2220 00:07:31.949 Data Units Written: 2007 00:07:31.949 Host Read Commands: 111904 00:07:31.949 Host Write Commands: 110173 00:07:31.949 Controller Busy Time: 0 minutes 00:07:31.949 Power Cycles: 0 00:07:31.949 Power On Hours: 0 hours 00:07:31.949 Unsafe Shutdowns: 0 00:07:31.949 Unrecoverable Media Errors: 0 00:07:31.949 Lifetime Error Log Entries: 0 00:07:31.949 Warning Temperature Time: 0 minutes 00:07:31.949 Critical Temperature Time: 0 minutes 00:07:31.949 00:07:31.949 Number of Queues 00:07:31.949 ================ 00:07:31.949 Number of I/O Submission Queues: 64 00:07:31.949 Number of I/O Completion Queues: 64 00:07:31.949 00:07:31.949 ZNS Specific Controller Data 00:07:31.949 ============================ 00:07:31.949 Zone Append Size Limit: 0 00:07:31.949 00:07:31.949 00:07:31.949 Active Namespaces 00:07:31.949 ================= 00:07:31.949 Namespace ID:1 00:07:31.949 Error Recovery Timeout: Unlimited 00:07:31.949 Command Set Identifier: NVM (00h) 00:07:31.949 Deallocate: Supported 00:07:31.949 Deallocated/Unwritten Error: Supported 00:07:31.949 Deallocated Read Value: All 0x00 00:07:31.949 Deallocate in Write Zeroes: Not Supported 00:07:31.949 Deallocated Guard Field: 0xFFFF 00:07:31.949 Flush: Supported 00:07:31.949 Reservation: Not Supported 00:07:31.949 Namespace Sharing Capabilities: Private 00:07:31.949 Size (in LBAs): 1048576 (4GiB) 00:07:31.949 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.949 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.949 Thin Provisioning: Not Supported 00:07:31.949 Per-NS Atomic Units: No 00:07:31.949 Maximum Single Source Range Length: 128 00:07:31.949 Maximum Copy Length: 128 00:07:31.949 Maximum Source Range Count: 128 00:07:31.949 NGUID/EUI64 Never Reused: No 00:07:31.949 Namespace Write Protected: No 00:07:31.949 Number of LBA Formats: 8 00:07:31.949 Current LBA Format: LBA Format #04 00:07:31.949 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.949 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.949 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.949 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.949 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:31.949 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.949 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.949 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.949 00:07:31.949 NVM Specific Namespace Data 00:07:31.949 =========================== 00:07:31.949 Logical Block Storage Tag Mask: 0 00:07:31.949 Protection Information Capabilities: 00:07:31.949 16b Guard Protection Information Storage Tag Support: No 00:07:31.949 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.949 Storage Tag Check Read Support: No 00:07:31.949 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.949 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Namespace ID:2 00:07:31.950 Error Recovery Timeout: Unlimited 00:07:31.950 Command Set Identifier: NVM (00h) 00:07:31.950 Deallocate: Supported 00:07:31.950 Deallocated/Unwritten Error: Supported 00:07:31.950 Deallocated Read Value: All 0x00 00:07:31.950 Deallocate in Write Zeroes: Not Supported 00:07:31.950 Deallocated Guard Field: 0xFFFF 00:07:31.950 Flush: Supported 00:07:31.950 Reservation: Not Supported 00:07:31.950 Namespace Sharing Capabilities: Private 00:07:31.950 Size (in LBAs): 1048576 (4GiB) 00:07:31.950 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.950 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.950 Thin Provisioning: Not Supported 00:07:31.950 Per-NS Atomic Units: No 00:07:31.950 Maximum Single Source Range Length: 128 00:07:31.950 Maximum Copy Length: 128 00:07:31.950 Maximum Source Range Count: 128 00:07:31.950 NGUID/EUI64 Never Reused: No 00:07:31.950 Namespace Write Protected: No 00:07:31.950 Number of LBA Formats: 8 00:07:31.950 Current LBA Format: LBA Format #04 00:07:31.950 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.950 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.950 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.950 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.950 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.950 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.950 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.950 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.950 00:07:31.950 NVM Specific Namespace Data 00:07:31.950 =========================== 00:07:31.950 Logical Block Storage Tag Mask: 0 00:07:31.950 Protection Information Capabilities: 00:07:31.950 16b Guard Protection Information Storage Tag Support: No 00:07:31.950 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.950 Storage Tag Check Read Support: No 00:07:31.950 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:31.950 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Namespace ID:3 00:07:31.950 Error Recovery Timeout: Unlimited 00:07:31.950 Command Set Identifier: NVM (00h) 00:07:31.950 Deallocate: Supported 00:07:31.950 Deallocated/Unwritten Error: Supported 00:07:31.950 Deallocated Read Value: All 0x00 00:07:31.950 Deallocate in Write Zeroes: Not Supported 00:07:31.950 Deallocated Guard Field: 0xFFFF 00:07:31.950 Flush: Supported 00:07:31.950 Reservation: Not Supported 00:07:31.950 Namespace Sharing Capabilities: Private 00:07:31.950 Size (in LBAs): 1048576 (4GiB) 00:07:31.950 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.950 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.950 Thin Provisioning: Not Supported 00:07:31.950 Per-NS Atomic Units: No 00:07:31.950 Maximum Single Source Range Length: 128 00:07:31.950 Maximum Copy Length: 128 00:07:31.950 Maximum Source Range Count: 128 00:07:31.950 NGUID/EUI64 Never Reused: No 00:07:31.950 Namespace Write Protected: No 00:07:31.950 Number of LBA Formats: 8 00:07:31.950 Current LBA Format: LBA Format #04 00:07:31.950 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.950 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.950 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.950 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.950 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.950 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.950 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.950 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.950 00:07:31.950 NVM Specific Namespace Data 00:07:31.950 =========================== 00:07:31.950 Logical Block Storage Tag Mask: 0 00:07:31.950 Protection Information Capabilities: 00:07:31.950 16b Guard Protection Information Storage Tag Support: No 00:07:31.950 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.950 Storage Tag Check Read Support: No 00:07:31.950 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.950 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.210 12:19:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.210 12:19:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:32.210 ===================================================== 00:07:32.210 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:32.210 ===================================================== 00:07:32.210 Controller Capabilities/Features 00:07:32.210 ================================ 00:07:32.210 Vendor ID: 1b36 00:07:32.210 Subsystem Vendor ID: 1af4 00:07:32.210 Serial Number: 12340 00:07:32.210 Model Number: QEMU NVMe Ctrl 00:07:32.210 Firmware Version: 8.0.0 00:07:32.210 Recommended Arb Burst: 6 00:07:32.210 IEEE OUI Identifier: 00 54 52 00:07:32.210 Multi-path I/O 00:07:32.210 May have multiple subsystem ports: No 00:07:32.210 May have multiple controllers: No 00:07:32.210 Associated with SR-IOV VF: No 00:07:32.210 Max Data Transfer Size: 524288 00:07:32.210 Max Number of Namespaces: 256 00:07:32.210 Max Number of I/O Queues: 64 00:07:32.210 NVMe Specification Version (VS): 1.4 00:07:32.210 NVMe Specification Version (Identify): 1.4 00:07:32.210 Maximum Queue Entries: 2048 00:07:32.210 Contiguous Queues Required: Yes 00:07:32.210 Arbitration Mechanisms Supported 00:07:32.210 Weighted Round Robin: Not Supported 00:07:32.210 Vendor Specific: Not Supported 00:07:32.210 Reset Timeout: 7500 ms 00:07:32.210 Doorbell Stride: 4 bytes 00:07:32.210 NVM Subsystem Reset: Not Supported 00:07:32.210 Command Sets Supported 00:07:32.210 NVM Command Set: Supported 00:07:32.210 Boot Partition: Not Supported 00:07:32.210 Memory Page Size Minimum: 4096 bytes 00:07:32.210 Memory Page Size Maximum: 65536 bytes 00:07:32.210 Persistent Memory Region: Not Supported 00:07:32.210 Optional Asynchronous Events Supported 00:07:32.210 Namespace Attribute Notices: Supported 00:07:32.210 Firmware Activation Notices: Not Supported 00:07:32.210 ANA Change Notices: Not Supported 00:07:32.210 PLE Aggregate Log Change Notices: Not Supported 00:07:32.210 LBA Status Info Alert Notices: Not Supported 00:07:32.210 EGE Aggregate Log Change Notices: Not Supported 00:07:32.210 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.210 Zone Descriptor Change Notices: Not Supported 00:07:32.210 Discovery Log Change Notices: Not Supported 00:07:32.210 Controller Attributes 00:07:32.210 128-bit Host Identifier: Not Supported 00:07:32.210 Non-Operational Permissive Mode: Not Supported 00:07:32.210 NVM Sets: Not Supported 00:07:32.210 Read Recovery Levels: Not Supported 00:07:32.210 Endurance Groups: Not Supported 00:07:32.210 Predictable Latency Mode: Not Supported 00:07:32.210 Traffic Based Keep Alive: Not Supported 00:07:32.210 Namespace Granularity: Not Supported 00:07:32.210 SQ Associations: Not Supported 00:07:32.210 UUID List: Not Supported 00:07:32.210 Multi-Domain Subsystem: Not Supported 00:07:32.210 Fixed Capacity Management: Not Supported 00:07:32.210 Variable Capacity Management: Not Supported 00:07:32.210 Delete Endurance Group: Not Supported 00:07:32.210 Delete NVM Set: Not Supported 00:07:32.210 Extended LBA Formats Supported: Supported 00:07:32.210 Flexible Data Placement Supported: Not Supported 00:07:32.210 00:07:32.210 Controller Memory Buffer Support 00:07:32.210 ================================ 00:07:32.210 Supported: No 00:07:32.210 00:07:32.210 Persistent Memory Region Support 00:07:32.210 ================================ 00:07:32.210 Supported: No 00:07:32.210 00:07:32.210 Admin Command Set Attributes 00:07:32.210 ============================ 00:07:32.210 Security Send/Receive: Not Supported 00:07:32.210 
Format NVM: Supported 00:07:32.210 Firmware Activate/Download: Not Supported 00:07:32.210 Namespace Management: Supported 00:07:32.210 Device Self-Test: Not Supported 00:07:32.210 Directives: Supported 00:07:32.210 NVMe-MI: Not Supported 00:07:32.210 Virtualization Management: Not Supported 00:07:32.210 Doorbell Buffer Config: Supported 00:07:32.210 Get LBA Status Capability: Not Supported 00:07:32.210 Command & Feature Lockdown Capability: Not Supported 00:07:32.210 Abort Command Limit: 4 00:07:32.210 Async Event Request Limit: 4 00:07:32.210 Number of Firmware Slots: N/A 00:07:32.210 Firmware Slot 1 Read-Only: N/A 00:07:32.210 Firmware Activation Without Reset: N/A 00:07:32.210 Multiple Update Detection Support: N/A 00:07:32.210 Firmware Update Granularity: No Information Provided 00:07:32.210 Per-Namespace SMART Log: Yes 00:07:32.210 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.210 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:32.210 Command Effects Log Page: Supported 00:07:32.210 Get Log Page Extended Data: Supported 00:07:32.210 Telemetry Log Pages: Not Supported 00:07:32.210 Persistent Event Log Pages: Not Supported 00:07:32.210 Supported Log Pages Log Page: May Support 00:07:32.210 Commands Supported & Effects Log Page: Not Supported 00:07:32.210 Feature Identifiers & Effects Log Page: May Support 00:07:32.210 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.210 Data Area 4 for Telemetry Log: Not Supported 00:07:32.210 Error Log Page Entries Supported: 1 00:07:32.210 Keep Alive: Not Supported 00:07:32.210 00:07:32.210 NVM Command Set Attributes 00:07:32.210 ========================== 00:07:32.210 Submission Queue Entry Size 00:07:32.210 Max: 64 00:07:32.210 Min: 64 00:07:32.210 Completion Queue Entry Size 00:07:32.210 Max: 16 00:07:32.210 Min: 16 00:07:32.210 Number of Namespaces: 256 00:07:32.210 Compare Command: Supported 00:07:32.210 Write Uncorrectable Command: Not Supported 00:07:32.210 Dataset Management Command: Supported 00:07:32.210 Write Zeroes Command: Supported 00:07:32.210 Set Features Save Field: Supported 00:07:32.210 Reservations: Not Supported 00:07:32.210 Timestamp: Supported 00:07:32.210 Copy: Supported 00:07:32.211 Volatile Write Cache: Present 00:07:32.211 Atomic Write Unit (Normal): 1 00:07:32.211 Atomic Write Unit (PFail): 1 00:07:32.211 Atomic Compare & Write Unit: 1 00:07:32.211 Fused Compare & Write: Not Supported 00:07:32.211 Scatter-Gather List 00:07:32.211 SGL Command Set: Supported 00:07:32.211 SGL Keyed: Not Supported 00:07:32.211 SGL Bit Bucket Descriptor: Not Supported 00:07:32.211 SGL Metadata Pointer: Not Supported 00:07:32.211 Oversized SGL: Not Supported 00:07:32.211 SGL Metadata Address: Not Supported 00:07:32.211 SGL Offset: Not Supported 00:07:32.211 Transport SGL Data Block: Not Supported 00:07:32.211 Replay Protected Memory Block: Not Supported 00:07:32.211 00:07:32.211 Firmware Slot Information 00:07:32.211 ========================= 00:07:32.211 Active slot: 1 00:07:32.211 Slot 1 Firmware Revision: 1.0 00:07:32.211 00:07:32.211 00:07:32.211 Commands Supported and Effects 00:07:32.211 ============================== 00:07:32.211 Admin Commands 00:07:32.211 -------------- 00:07:32.211 Delete I/O Submission Queue (00h): Supported 00:07:32.211 Create I/O Submission Queue (01h): Supported 00:07:32.211 Get Log Page (02h): Supported 00:07:32.211 Delete I/O Completion Queue (04h): Supported 00:07:32.211 Create I/O Completion Queue (05h): Supported 00:07:32.211 Identify (06h): Supported 00:07:32.211 Abort (08h): Supported 
00:07:32.211 Set Features (09h): Supported 00:07:32.211 Get Features (0Ah): Supported 00:07:32.211 Asynchronous Event Request (0Ch): Supported 00:07:32.211 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.211 Directive Send (19h): Supported 00:07:32.211 Directive Receive (1Ah): Supported 00:07:32.211 Virtualization Management (1Ch): Supported 00:07:32.211 Doorbell Buffer Config (7Ch): Supported 00:07:32.211 Format NVM (80h): Supported LBA-Change 00:07:32.211 I/O Commands 00:07:32.211 ------------ 00:07:32.211 Flush (00h): Supported LBA-Change 00:07:32.211 Write (01h): Supported LBA-Change 00:07:32.211 Read (02h): Supported 00:07:32.211 Compare (05h): Supported 00:07:32.211 Write Zeroes (08h): Supported LBA-Change 00:07:32.211 Dataset Management (09h): Supported LBA-Change 00:07:32.211 Unknown (0Ch): Supported 00:07:32.211 Unknown (12h): Supported 00:07:32.211 Copy (19h): Supported LBA-Change 00:07:32.211 Unknown (1Dh): Supported LBA-Change 00:07:32.211 00:07:32.211 Error Log 00:07:32.211 ========= 00:07:32.211 00:07:32.211 Arbitration 00:07:32.211 =========== 00:07:32.211 Arbitration Burst: no limit 00:07:32.211 00:07:32.211 Power Management 00:07:32.211 ================ 00:07:32.211 Number of Power States: 1 00:07:32.211 Current Power State: Power State #0 00:07:32.211 Power State #0: 00:07:32.211 Max Power: 25.00 W 00:07:32.211 Non-Operational State: Operational 00:07:32.211 Entry Latency: 16 microseconds 00:07:32.211 Exit Latency: 4 microseconds 00:07:32.211 Relative Read Throughput: 0 00:07:32.211 Relative Read Latency: 0 00:07:32.211 Relative Write Throughput: 0 00:07:32.211 Relative Write Latency: 0 00:07:32.211 Idle Power: Not Reported 00:07:32.211 Active Power: Not Reported 00:07:32.211 Non-Operational Permissive Mode: Not Supported 00:07:32.211 00:07:32.211 Health Information 00:07:32.211 ================== 00:07:32.211 Critical Warnings: 00:07:32.211 Available Spare Space: OK 00:07:32.211 Temperature: OK 00:07:32.211 Device Reliability: OK 00:07:32.211 Read Only: No 00:07:32.211 Volatile Memory Backup: OK 00:07:32.211 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.211 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.211 Available Spare: 0% 00:07:32.211 Available Spare Threshold: 0% 00:07:32.211 Life Percentage Used: 0% 00:07:32.211 Data Units Read: 698 00:07:32.211 Data Units Written: 626 00:07:32.211 Host Read Commands: 36852 00:07:32.211 Host Write Commands: 36638 00:07:32.211 Controller Busy Time: 0 minutes 00:07:32.211 Power Cycles: 0 00:07:32.211 Power On Hours: 0 hours 00:07:32.211 Unsafe Shutdowns: 0 00:07:32.211 Unrecoverable Media Errors: 0 00:07:32.211 Lifetime Error Log Entries: 0 00:07:32.211 Warning Temperature Time: 0 minutes 00:07:32.211 Critical Temperature Time: 0 minutes 00:07:32.211 00:07:32.211 Number of Queues 00:07:32.211 ================ 00:07:32.211 Number of I/O Submission Queues: 64 00:07:32.211 Number of I/O Completion Queues: 64 00:07:32.211 00:07:32.211 ZNS Specific Controller Data 00:07:32.211 ============================ 00:07:32.211 Zone Append Size Limit: 0 00:07:32.211 00:07:32.211 00:07:32.211 Active Namespaces 00:07:32.211 ================= 00:07:32.211 Namespace ID:1 00:07:32.211 Error Recovery Timeout: Unlimited 00:07:32.211 Command Set Identifier: NVM (00h) 00:07:32.211 Deallocate: Supported 00:07:32.211 Deallocated/Unwritten Error: Supported 00:07:32.211 Deallocated Read Value: All 0x00 00:07:32.211 Deallocate in Write Zeroes: Not Supported 00:07:32.211 Deallocated Guard Field: 0xFFFF 00:07:32.211 Flush: 
Supported 00:07:32.211 Reservation: Not Supported 00:07:32.211 Metadata Transferred as: Separate Metadata Buffer 00:07:32.211 Namespace Sharing Capabilities: Private 00:07:32.211 Size (in LBAs): 1548666 (5GiB) 00:07:32.211 Capacity (in LBAs): 1548666 (5GiB) 00:07:32.211 Utilization (in LBAs): 1548666 (5GiB) 00:07:32.211 Thin Provisioning: Not Supported 00:07:32.211 Per-NS Atomic Units: No 00:07:32.211 Maximum Single Source Range Length: 128 00:07:32.211 Maximum Copy Length: 128 00:07:32.211 Maximum Source Range Count: 128 00:07:32.211 NGUID/EUI64 Never Reused: No 00:07:32.211 Namespace Write Protected: No 00:07:32.211 Number of LBA Formats: 8 00:07:32.211 Current LBA Format: LBA Format #07 00:07:32.211 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.211 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.211 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.211 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.211 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.211 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.211 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.211 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.211 00:07:32.211 NVM Specific Namespace Data 00:07:32.211 =========================== 00:07:32.211 Logical Block Storage Tag Mask: 0 00:07:32.211 Protection Information Capabilities: 00:07:32.211 16b Guard Protection Information Storage Tag Support: No 00:07:32.211 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.211 Storage Tag Check Read Support: No 00:07:32.211 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.211 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.211 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.211 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.211 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.211 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.211 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.211 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.211 12:19:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.211 12:19:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:32.470 ===================================================== 00:07:32.471 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:32.471 ===================================================== 00:07:32.471 Controller Capabilities/Features 00:07:32.471 ================================ 00:07:32.471 Vendor ID: 1b36 00:07:32.471 Subsystem Vendor ID: 1af4 00:07:32.471 Serial Number: 12341 00:07:32.471 Model Number: QEMU NVMe Ctrl 00:07:32.471 Firmware Version: 8.0.0 00:07:32.471 Recommended Arb Burst: 6 00:07:32.471 IEEE OUI Identifier: 00 54 52 00:07:32.471 Multi-path I/O 00:07:32.471 May have multiple subsystem ports: No 00:07:32.471 May have multiple controllers: No 00:07:32.471 Associated with SR-IOV VF: No 00:07:32.471 Max Data Transfer Size: 524288 00:07:32.471 Max Number of Namespaces: 256 00:07:32.471 Max Number of I/O Queues: 64 00:07:32.471 NVMe 
Specification Version (VS): 1.4 00:07:32.471 NVMe Specification Version (Identify): 1.4 00:07:32.471 Maximum Queue Entries: 2048 00:07:32.471 Contiguous Queues Required: Yes 00:07:32.471 Arbitration Mechanisms Supported 00:07:32.471 Weighted Round Robin: Not Supported 00:07:32.471 Vendor Specific: Not Supported 00:07:32.471 Reset Timeout: 7500 ms 00:07:32.471 Doorbell Stride: 4 bytes 00:07:32.471 NVM Subsystem Reset: Not Supported 00:07:32.471 Command Sets Supported 00:07:32.471 NVM Command Set: Supported 00:07:32.471 Boot Partition: Not Supported 00:07:32.471 Memory Page Size Minimum: 4096 bytes 00:07:32.471 Memory Page Size Maximum: 65536 bytes 00:07:32.471 Persistent Memory Region: Not Supported 00:07:32.471 Optional Asynchronous Events Supported 00:07:32.471 Namespace Attribute Notices: Supported 00:07:32.471 Firmware Activation Notices: Not Supported 00:07:32.471 ANA Change Notices: Not Supported 00:07:32.471 PLE Aggregate Log Change Notices: Not Supported 00:07:32.471 LBA Status Info Alert Notices: Not Supported 00:07:32.471 EGE Aggregate Log Change Notices: Not Supported 00:07:32.471 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.471 Zone Descriptor Change Notices: Not Supported 00:07:32.471 Discovery Log Change Notices: Not Supported 00:07:32.471 Controller Attributes 00:07:32.471 128-bit Host Identifier: Not Supported 00:07:32.471 Non-Operational Permissive Mode: Not Supported 00:07:32.471 NVM Sets: Not Supported 00:07:32.471 Read Recovery Levels: Not Supported 00:07:32.471 Endurance Groups: Not Supported 00:07:32.471 Predictable Latency Mode: Not Supported 00:07:32.471 Traffic Based Keep Alive: Not Supported 00:07:32.471 Namespace Granularity: Not Supported 00:07:32.471 SQ Associations: Not Supported 00:07:32.471 UUID List: Not Supported 00:07:32.471 Multi-Domain Subsystem: Not Supported 00:07:32.471 Fixed Capacity Management: Not Supported 00:07:32.471 Variable Capacity Management: Not Supported 00:07:32.471 Delete Endurance Group: Not Supported 00:07:32.471 Delete NVM Set: Not Supported 00:07:32.471 Extended LBA Formats Supported: Supported 00:07:32.471 Flexible Data Placement Supported: Not Supported 00:07:32.471 00:07:32.471 Controller Memory Buffer Support 00:07:32.471 ================================ 00:07:32.471 Supported: No 00:07:32.471 00:07:32.471 Persistent Memory Region Support 00:07:32.471 ================================ 00:07:32.471 Supported: No 00:07:32.471 00:07:32.471 Admin Command Set Attributes 00:07:32.471 ============================ 00:07:32.471 Security Send/Receive: Not Supported 00:07:32.471 Format NVM: Supported 00:07:32.471 Firmware Activate/Download: Not Supported 00:07:32.471 Namespace Management: Supported 00:07:32.471 Device Self-Test: Not Supported 00:07:32.471 Directives: Supported 00:07:32.471 NVMe-MI: Not Supported 00:07:32.471 Virtualization Management: Not Supported 00:07:32.471 Doorbell Buffer Config: Supported 00:07:32.471 Get LBA Status Capability: Not Supported 00:07:32.471 Command & Feature Lockdown Capability: Not Supported 00:07:32.471 Abort Command Limit: 4 00:07:32.471 Async Event Request Limit: 4 00:07:32.471 Number of Firmware Slots: N/A 00:07:32.471 Firmware Slot 1 Read-Only: N/A 00:07:32.471 Firmware Activation Without Reset: N/A 00:07:32.471 Multiple Update Detection Support: N/A 00:07:32.471 Firmware Update Granularity: No Information Provided 00:07:32.471 Per-Namespace SMART Log: Yes 00:07:32.471 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.471 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:32.471 Command Effects Log Page: Supported 00:07:32.471 Get Log Page Extended Data: Supported 00:07:32.471 Telemetry Log Pages: Not Supported 00:07:32.471 Persistent Event Log Pages: Not Supported 00:07:32.471 Supported Log Pages Log Page: May Support 00:07:32.471 Commands Supported & Effects Log Page: Not Supported 00:07:32.471 Feature Identifiers & Effects Log Page: May Support 00:07:32.471 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.471 Data Area 4 for Telemetry Log: Not Supported 00:07:32.471 Error Log Page Entries Supported: 1 00:07:32.471 Keep Alive: Not Supported 00:07:32.471 00:07:32.471 NVM Command Set Attributes 00:07:32.471 ========================== 00:07:32.471 Submission Queue Entry Size 00:07:32.471 Max: 64 00:07:32.471 Min: 64 00:07:32.471 Completion Queue Entry Size 00:07:32.471 Max: 16 00:07:32.471 Min: 16 00:07:32.471 Number of Namespaces: 256 00:07:32.471 Compare Command: Supported 00:07:32.471 Write Uncorrectable Command: Not Supported 00:07:32.471 Dataset Management Command: Supported 00:07:32.471 Write Zeroes Command: Supported 00:07:32.471 Set Features Save Field: Supported 00:07:32.471 Reservations: Not Supported 00:07:32.471 Timestamp: Supported 00:07:32.471 Copy: Supported 00:07:32.471 Volatile Write Cache: Present 00:07:32.471 Atomic Write Unit (Normal): 1 00:07:32.471 Atomic Write Unit (PFail): 1 00:07:32.471 Atomic Compare & Write Unit: 1 00:07:32.471 Fused Compare & Write: Not Supported 00:07:32.471 Scatter-Gather List 00:07:32.471 SGL Command Set: Supported 00:07:32.471 SGL Keyed: Not Supported 00:07:32.471 SGL Bit Bucket Descriptor: Not Supported 00:07:32.471 SGL Metadata Pointer: Not Supported 00:07:32.471 Oversized SGL: Not Supported 00:07:32.471 SGL Metadata Address: Not Supported 00:07:32.471 SGL Offset: Not Supported 00:07:32.471 Transport SGL Data Block: Not Supported 00:07:32.471 Replay Protected Memory Block: Not Supported 00:07:32.471 00:07:32.471 Firmware Slot Information 00:07:32.471 ========================= 00:07:32.471 Active slot: 1 00:07:32.471 Slot 1 Firmware Revision: 1.0 00:07:32.471 00:07:32.471 00:07:32.471 Commands Supported and Effects 00:07:32.471 ============================== 00:07:32.471 Admin Commands 00:07:32.471 -------------- 00:07:32.471 Delete I/O Submission Queue (00h): Supported 00:07:32.471 Create I/O Submission Queue (01h): Supported 00:07:32.471 Get Log Page (02h): Supported 00:07:32.471 Delete I/O Completion Queue (04h): Supported 00:07:32.471 Create I/O Completion Queue (05h): Supported 00:07:32.471 Identify (06h): Supported 00:07:32.471 Abort (08h): Supported 00:07:32.471 Set Features (09h): Supported 00:07:32.471 Get Features (0Ah): Supported 00:07:32.471 Asynchronous Event Request (0Ch): Supported 00:07:32.471 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.471 Directive Send (19h): Supported 00:07:32.471 Directive Receive (1Ah): Supported 00:07:32.471 Virtualization Management (1Ch): Supported 00:07:32.471 Doorbell Buffer Config (7Ch): Supported 00:07:32.471 Format NVM (80h): Supported LBA-Change 00:07:32.471 I/O Commands 00:07:32.471 ------------ 00:07:32.471 Flush (00h): Supported LBA-Change 00:07:32.471 Write (01h): Supported LBA-Change 00:07:32.471 Read (02h): Supported 00:07:32.471 Compare (05h): Supported 00:07:32.471 Write Zeroes (08h): Supported LBA-Change 00:07:32.471 Dataset Management (09h): Supported LBA-Change 00:07:32.471 Unknown (0Ch): Supported 00:07:32.472 Unknown (12h): Supported 00:07:32.472 Copy (19h): Supported LBA-Change 00:07:32.472 Unknown (1Dh): 
Supported LBA-Change 00:07:32.472 00:07:32.472 Error Log 00:07:32.472 ========= 00:07:32.472 00:07:32.472 Arbitration 00:07:32.472 =========== 00:07:32.472 Arbitration Burst: no limit 00:07:32.472 00:07:32.472 Power Management 00:07:32.472 ================ 00:07:32.472 Number of Power States: 1 00:07:32.472 Current Power State: Power State #0 00:07:32.472 Power State #0: 00:07:32.472 Max Power: 25.00 W 00:07:32.472 Non-Operational State: Operational 00:07:32.472 Entry Latency: 16 microseconds 00:07:32.472 Exit Latency: 4 microseconds 00:07:32.472 Relative Read Throughput: 0 00:07:32.472 Relative Read Latency: 0 00:07:32.472 Relative Write Throughput: 0 00:07:32.472 Relative Write Latency: 0 00:07:32.472 Idle Power: Not Reported 00:07:32.472 Active Power: Not Reported 00:07:32.472 Non-Operational Permissive Mode: Not Supported 00:07:32.472 00:07:32.472 Health Information 00:07:32.472 ================== 00:07:32.472 Critical Warnings: 00:07:32.472 Available Spare Space: OK 00:07:32.472 Temperature: OK 00:07:32.472 Device Reliability: OK 00:07:32.472 Read Only: No 00:07:32.472 Volatile Memory Backup: OK 00:07:32.472 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.472 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.472 Available Spare: 0% 00:07:32.472 Available Spare Threshold: 0% 00:07:32.472 Life Percentage Used: 0% 00:07:32.472 Data Units Read: 1094 00:07:32.472 Data Units Written: 961 00:07:32.472 Host Read Commands: 55772 00:07:32.472 Host Write Commands: 54555 00:07:32.472 Controller Busy Time: 0 minutes 00:07:32.472 Power Cycles: 0 00:07:32.472 Power On Hours: 0 hours 00:07:32.472 Unsafe Shutdowns: 0 00:07:32.472 Unrecoverable Media Errors: 0 00:07:32.472 Lifetime Error Log Entries: 0 00:07:32.472 Warning Temperature Time: 0 minutes 00:07:32.472 Critical Temperature Time: 0 minutes 00:07:32.472 00:07:32.472 Number of Queues 00:07:32.472 ================ 00:07:32.472 Number of I/O Submission Queues: 64 00:07:32.472 Number of I/O Completion Queues: 64 00:07:32.472 00:07:32.472 ZNS Specific Controller Data 00:07:32.472 ============================ 00:07:32.472 Zone Append Size Limit: 0 00:07:32.472 00:07:32.472 00:07:32.472 Active Namespaces 00:07:32.472 ================= 00:07:32.472 Namespace ID:1 00:07:32.472 Error Recovery Timeout: Unlimited 00:07:32.472 Command Set Identifier: NVM (00h) 00:07:32.472 Deallocate: Supported 00:07:32.472 Deallocated/Unwritten Error: Supported 00:07:32.472 Deallocated Read Value: All 0x00 00:07:32.472 Deallocate in Write Zeroes: Not Supported 00:07:32.472 Deallocated Guard Field: 0xFFFF 00:07:32.472 Flush: Supported 00:07:32.472 Reservation: Not Supported 00:07:32.472 Namespace Sharing Capabilities: Private 00:07:32.472 Size (in LBAs): 1310720 (5GiB) 00:07:32.472 Capacity (in LBAs): 1310720 (5GiB) 00:07:32.472 Utilization (in LBAs): 1310720 (5GiB) 00:07:32.472 Thin Provisioning: Not Supported 00:07:32.472 Per-NS Atomic Units: No 00:07:32.472 Maximum Single Source Range Length: 128 00:07:32.472 Maximum Copy Length: 128 00:07:32.472 Maximum Source Range Count: 128 00:07:32.472 NGUID/EUI64 Never Reused: No 00:07:32.472 Namespace Write Protected: No 00:07:32.472 Number of LBA Formats: 8 00:07:32.472 Current LBA Format: LBA Format #04 00:07:32.472 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.472 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.472 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.472 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.472 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:32.472 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.472 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.472 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.472 00:07:32.472 NVM Specific Namespace Data 00:07:32.472 =========================== 00:07:32.472 Logical Block Storage Tag Mask: 0 00:07:32.472 Protection Information Capabilities: 00:07:32.472 16b Guard Protection Information Storage Tag Support: No 00:07:32.472 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.472 Storage Tag Check Read Support: No 00:07:32.472 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.472 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.472 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.472 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.472 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.472 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.472 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.472 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.472 12:19:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.472 12:19:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:32.732 ===================================================== 00:07:32.732 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:32.732 ===================================================== 00:07:32.732 Controller Capabilities/Features 00:07:32.732 ================================ 00:07:32.732 Vendor ID: 1b36 00:07:32.732 Subsystem Vendor ID: 1af4 00:07:32.732 Serial Number: 12342 00:07:32.732 Model Number: QEMU NVMe Ctrl 00:07:32.732 Firmware Version: 8.0.0 00:07:32.732 Recommended Arb Burst: 6 00:07:32.732 IEEE OUI Identifier: 00 54 52 00:07:32.732 Multi-path I/O 00:07:32.732 May have multiple subsystem ports: No 00:07:32.732 May have multiple controllers: No 00:07:32.732 Associated with SR-IOV VF: No 00:07:32.732 Max Data Transfer Size: 524288 00:07:32.732 Max Number of Namespaces: 256 00:07:32.732 Max Number of I/O Queues: 64 00:07:32.732 NVMe Specification Version (VS): 1.4 00:07:32.732 NVMe Specification Version (Identify): 1.4 00:07:32.732 Maximum Queue Entries: 2048 00:07:32.732 Contiguous Queues Required: Yes 00:07:32.732 Arbitration Mechanisms Supported 00:07:32.732 Weighted Round Robin: Not Supported 00:07:32.732 Vendor Specific: Not Supported 00:07:32.732 Reset Timeout: 7500 ms 00:07:32.732 Doorbell Stride: 4 bytes 00:07:32.732 NVM Subsystem Reset: Not Supported 00:07:32.732 Command Sets Supported 00:07:32.732 NVM Command Set: Supported 00:07:32.732 Boot Partition: Not Supported 00:07:32.732 Memory Page Size Minimum: 4096 bytes 00:07:32.732 Memory Page Size Maximum: 65536 bytes 00:07:32.732 Persistent Memory Region: Not Supported 00:07:32.732 Optional Asynchronous Events Supported 00:07:32.732 Namespace Attribute Notices: Supported 00:07:32.732 Firmware Activation Notices: Not Supported 00:07:32.732 ANA Change Notices: Not Supported 00:07:32.732 PLE Aggregate Log Change Notices: Not Supported 00:07:32.732 LBA Status Info Alert Notices: 
Not Supported 00:07:32.732 EGE Aggregate Log Change Notices: Not Supported 00:07:32.732 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.732 Zone Descriptor Change Notices: Not Supported 00:07:32.732 Discovery Log Change Notices: Not Supported 00:07:32.732 Controller Attributes 00:07:32.732 128-bit Host Identifier: Not Supported 00:07:32.732 Non-Operational Permissive Mode: Not Supported 00:07:32.732 NVM Sets: Not Supported 00:07:32.732 Read Recovery Levels: Not Supported 00:07:32.732 Endurance Groups: Not Supported 00:07:32.732 Predictable Latency Mode: Not Supported 00:07:32.732 Traffic Based Keep Alive: Not Supported 00:07:32.732 Namespace Granularity: Not Supported 00:07:32.732 SQ Associations: Not Supported 00:07:32.732 UUID List: Not Supported 00:07:32.732 Multi-Domain Subsystem: Not Supported 00:07:32.732 Fixed Capacity Management: Not Supported 00:07:32.732 Variable Capacity Management: Not Supported 00:07:32.732 Delete Endurance Group: Not Supported 00:07:32.732 Delete NVM Set: Not Supported 00:07:32.732 Extended LBA Formats Supported: Supported 00:07:32.732 Flexible Data Placement Supported: Not Supported 00:07:32.732 00:07:32.732 Controller Memory Buffer Support 00:07:32.732 ================================ 00:07:32.732 Supported: No 00:07:32.732 00:07:32.732 Persistent Memory Region Support 00:07:32.732 ================================ 00:07:32.732 Supported: No 00:07:32.732 00:07:32.732 Admin Command Set Attributes 00:07:32.732 ============================ 00:07:32.732 Security Send/Receive: Not Supported 00:07:32.732 Format NVM: Supported 00:07:32.732 Firmware Activate/Download: Not Supported 00:07:32.732 Namespace Management: Supported 00:07:32.732 Device Self-Test: Not Supported 00:07:32.732 Directives: Supported 00:07:32.732 NVMe-MI: Not Supported 00:07:32.732 Virtualization Management: Not Supported 00:07:32.732 Doorbell Buffer Config: Supported 00:07:32.732 Get LBA Status Capability: Not Supported 00:07:32.732 Command & Feature Lockdown Capability: Not Supported 00:07:32.732 Abort Command Limit: 4 00:07:32.732 Async Event Request Limit: 4 00:07:32.732 Number of Firmware Slots: N/A 00:07:32.732 Firmware Slot 1 Read-Only: N/A 00:07:32.732 Firmware Activation Without Reset: N/A 00:07:32.732 Multiple Update Detection Support: N/A 00:07:32.732 Firmware Update Granularity: No Information Provided 00:07:32.732 Per-Namespace SMART Log: Yes 00:07:32.732 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.732 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:32.732 Command Effects Log Page: Supported 00:07:32.732 Get Log Page Extended Data: Supported 00:07:32.732 Telemetry Log Pages: Not Supported 00:07:32.732 Persistent Event Log Pages: Not Supported 00:07:32.732 Supported Log Pages Log Page: May Support 00:07:32.732 Commands Supported & Effects Log Page: Not Supported 00:07:32.732 Feature Identifiers & Effects Log Page: May Support 00:07:32.732 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.732 Data Area 4 for Telemetry Log: Not Supported 00:07:32.732 Error Log Page Entries Supported: 1 00:07:32.732 Keep Alive: Not Supported 00:07:32.732 00:07:32.732 NVM Command Set Attributes 00:07:32.732 ========================== 00:07:32.732 Submission Queue Entry Size 00:07:32.732 Max: 64 00:07:32.732 Min: 64 00:07:32.732 Completion Queue Entry Size 00:07:32.732 Max: 16 00:07:32.732 Min: 16 00:07:32.732 Number of Namespaces: 256 00:07:32.732 Compare Command: Supported 00:07:32.732 Write Uncorrectable Command: Not Supported 00:07:32.732 Dataset Management Command: 
Supported 00:07:32.732 Write Zeroes Command: Supported 00:07:32.732 Set Features Save Field: Supported 00:07:32.732 Reservations: Not Supported 00:07:32.732 Timestamp: Supported 00:07:32.732 Copy: Supported 00:07:32.732 Volatile Write Cache: Present 00:07:32.732 Atomic Write Unit (Normal): 1 00:07:32.732 Atomic Write Unit (PFail): 1 00:07:32.732 Atomic Compare & Write Unit: 1 00:07:32.732 Fused Compare & Write: Not Supported 00:07:32.732 Scatter-Gather List 00:07:32.732 SGL Command Set: Supported 00:07:32.732 SGL Keyed: Not Supported 00:07:32.732 SGL Bit Bucket Descriptor: Not Supported 00:07:32.732 SGL Metadata Pointer: Not Supported 00:07:32.732 Oversized SGL: Not Supported 00:07:32.732 SGL Metadata Address: Not Supported 00:07:32.732 SGL Offset: Not Supported 00:07:32.732 Transport SGL Data Block: Not Supported 00:07:32.732 Replay Protected Memory Block: Not Supported 00:07:32.732 00:07:32.732 Firmware Slot Information 00:07:32.732 ========================= 00:07:32.732 Active slot: 1 00:07:32.732 Slot 1 Firmware Revision: 1.0 00:07:32.732 00:07:32.732 00:07:32.732 Commands Supported and Effects 00:07:32.732 ============================== 00:07:32.732 Admin Commands 00:07:32.732 -------------- 00:07:32.732 Delete I/O Submission Queue (00h): Supported 00:07:32.732 Create I/O Submission Queue (01h): Supported 00:07:32.732 Get Log Page (02h): Supported 00:07:32.732 Delete I/O Completion Queue (04h): Supported 00:07:32.732 Create I/O Completion Queue (05h): Supported 00:07:32.732 Identify (06h): Supported 00:07:32.732 Abort (08h): Supported 00:07:32.732 Set Features (09h): Supported 00:07:32.732 Get Features (0Ah): Supported 00:07:32.732 Asynchronous Event Request (0Ch): Supported 00:07:32.732 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.732 Directive Send (19h): Supported 00:07:32.732 Directive Receive (1Ah): Supported 00:07:32.732 Virtualization Management (1Ch): Supported 00:07:32.732 Doorbell Buffer Config (7Ch): Supported 00:07:32.733 Format NVM (80h): Supported LBA-Change 00:07:32.733 I/O Commands 00:07:32.733 ------------ 00:07:32.733 Flush (00h): Supported LBA-Change 00:07:32.733 Write (01h): Supported LBA-Change 00:07:32.733 Read (02h): Supported 00:07:32.733 Compare (05h): Supported 00:07:32.733 Write Zeroes (08h): Supported LBA-Change 00:07:32.733 Dataset Management (09h): Supported LBA-Change 00:07:32.733 Unknown (0Ch): Supported 00:07:32.733 Unknown (12h): Supported 00:07:32.733 Copy (19h): Supported LBA-Change 00:07:32.733 Unknown (1Dh): Supported LBA-Change 00:07:32.733 00:07:32.733 Error Log 00:07:32.733 ========= 00:07:32.733 00:07:32.733 Arbitration 00:07:32.733 =========== 00:07:32.733 Arbitration Burst: no limit 00:07:32.733 00:07:32.733 Power Management 00:07:32.733 ================ 00:07:32.733 Number of Power States: 1 00:07:32.733 Current Power State: Power State #0 00:07:32.733 Power State #0: 00:07:32.733 Max Power: 25.00 W 00:07:32.733 Non-Operational State: Operational 00:07:32.733 Entry Latency: 16 microseconds 00:07:32.733 Exit Latency: 4 microseconds 00:07:32.733 Relative Read Throughput: 0 00:07:32.733 Relative Read Latency: 0 00:07:32.733 Relative Write Throughput: 0 00:07:32.733 Relative Write Latency: 0 00:07:32.733 Idle Power: Not Reported 00:07:32.733 Active Power: Not Reported 00:07:32.733 Non-Operational Permissive Mode: Not Supported 00:07:32.733 00:07:32.733 Health Information 00:07:32.733 ================== 00:07:32.733 Critical Warnings: 00:07:32.733 Available Spare Space: OK 00:07:32.733 Temperature: OK 00:07:32.733 Device 
Reliability: OK 00:07:32.733 Read Only: No 00:07:32.733 Volatile Memory Backup: OK 00:07:32.733 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.733 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.733 Available Spare: 0% 00:07:32.733 Available Spare Threshold: 0% 00:07:32.733 Life Percentage Used: 0% 00:07:32.733 Data Units Read: 2220 00:07:32.733 Data Units Written: 2007 00:07:32.733 Host Read Commands: 111904 00:07:32.733 Host Write Commands: 110173 00:07:32.733 Controller Busy Time: 0 minutes 00:07:32.733 Power Cycles: 0 00:07:32.733 Power On Hours: 0 hours 00:07:32.733 Unsafe Shutdowns: 0 00:07:32.733 Unrecoverable Media Errors: 0 00:07:32.733 Lifetime Error Log Entries: 0 00:07:32.733 Warning Temperature Time: 0 minutes 00:07:32.733 Critical Temperature Time: 0 minutes 00:07:32.733 00:07:32.733 Number of Queues 00:07:32.733 ================ 00:07:32.733 Number of I/O Submission Queues: 64 00:07:32.733 Number of I/O Completion Queues: 64 00:07:32.733 00:07:32.733 ZNS Specific Controller Data 00:07:32.733 ============================ 00:07:32.733 Zone Append Size Limit: 0 00:07:32.733 00:07:32.733 00:07:32.733 Active Namespaces 00:07:32.733 ================= 00:07:32.733 Namespace ID:1 00:07:32.733 Error Recovery Timeout: Unlimited 00:07:32.733 Command Set Identifier: NVM (00h) 00:07:32.733 Deallocate: Supported 00:07:32.733 Deallocated/Unwritten Error: Supported 00:07:32.733 Deallocated Read Value: All 0x00 00:07:32.733 Deallocate in Write Zeroes: Not Supported 00:07:32.733 Deallocated Guard Field: 0xFFFF 00:07:32.733 Flush: Supported 00:07:32.733 Reservation: Not Supported 00:07:32.733 Namespace Sharing Capabilities: Private 00:07:32.733 Size (in LBAs): 1048576 (4GiB) 00:07:32.733 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.733 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.733 Thin Provisioning: Not Supported 00:07:32.733 Per-NS Atomic Units: No 00:07:32.733 Maximum Single Source Range Length: 128 00:07:32.733 Maximum Copy Length: 128 00:07:32.733 Maximum Source Range Count: 128 00:07:32.733 NGUID/EUI64 Never Reused: No 00:07:32.733 Namespace Write Protected: No 00:07:32.733 Number of LBA Formats: 8 00:07:32.733 Current LBA Format: LBA Format #04 00:07:32.733 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.733 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.733 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.733 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.733 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.733 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.733 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.733 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.733 00:07:32.733 NVM Specific Namespace Data 00:07:32.733 =========================== 00:07:32.733 Logical Block Storage Tag Mask: 0 00:07:32.733 Protection Information Capabilities: 00:07:32.733 16b Guard Protection Information Storage Tag Support: No 00:07:32.733 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.733 Storage Tag Check Read Support: No 00:07:32.733 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Namespace ID:2 00:07:32.733 Error Recovery Timeout: Unlimited 00:07:32.733 Command Set Identifier: NVM (00h) 00:07:32.733 Deallocate: Supported 00:07:32.733 Deallocated/Unwritten Error: Supported 00:07:32.733 Deallocated Read Value: All 0x00 00:07:32.733 Deallocate in Write Zeroes: Not Supported 00:07:32.733 Deallocated Guard Field: 0xFFFF 00:07:32.733 Flush: Supported 00:07:32.733 Reservation: Not Supported 00:07:32.733 Namespace Sharing Capabilities: Private 00:07:32.733 Size (in LBAs): 1048576 (4GiB) 00:07:32.733 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.733 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.733 Thin Provisioning: Not Supported 00:07:32.733 Per-NS Atomic Units: No 00:07:32.733 Maximum Single Source Range Length: 128 00:07:32.733 Maximum Copy Length: 128 00:07:32.733 Maximum Source Range Count: 128 00:07:32.733 NGUID/EUI64 Never Reused: No 00:07:32.733 Namespace Write Protected: No 00:07:32.733 Number of LBA Formats: 8 00:07:32.733 Current LBA Format: LBA Format #04 00:07:32.733 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.733 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.733 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.733 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.733 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.733 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.733 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.733 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.733 00:07:32.733 NVM Specific Namespace Data 00:07:32.733 =========================== 00:07:32.733 Logical Block Storage Tag Mask: 0 00:07:32.733 Protection Information Capabilities: 00:07:32.733 16b Guard Protection Information Storage Tag Support: No 00:07:32.733 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.733 Storage Tag Check Read Support: No 00:07:32.733 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.733 Namespace ID:3 00:07:32.733 Error Recovery Timeout: Unlimited 00:07:32.733 Command Set Identifier: NVM (00h) 00:07:32.733 Deallocate: Supported 00:07:32.733 Deallocated/Unwritten Error: Supported 00:07:32.733 Deallocated Read Value: All 0x00 00:07:32.733 Deallocate in Write Zeroes: Not Supported 00:07:32.733 Deallocated Guard Field: 0xFFFF 00:07:32.733 Flush: Supported 00:07:32.733 Reservation: Not Supported 00:07:32.733 
Namespace Sharing Capabilities: Private 00:07:32.733 Size (in LBAs): 1048576 (4GiB) 00:07:32.733 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.733 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.733 Thin Provisioning: Not Supported 00:07:32.733 Per-NS Atomic Units: No 00:07:32.733 Maximum Single Source Range Length: 128 00:07:32.733 Maximum Copy Length: 128 00:07:32.733 Maximum Source Range Count: 128 00:07:32.733 NGUID/EUI64 Never Reused: No 00:07:32.733 Namespace Write Protected: No 00:07:32.733 Number of LBA Formats: 8 00:07:32.733 Current LBA Format: LBA Format #04 00:07:32.733 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.733 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.733 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.734 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.734 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.734 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.734 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.734 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.734 00:07:32.734 NVM Specific Namespace Data 00:07:32.734 =========================== 00:07:32.734 Logical Block Storage Tag Mask: 0 00:07:32.734 Protection Information Capabilities: 00:07:32.734 16b Guard Protection Information Storage Tag Support: No 00:07:32.734 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.734 Storage Tag Check Read Support: No 00:07:32.734 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.734 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.734 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.734 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.734 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.734 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.734 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.734 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.734 12:19:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.734 12:19:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:32.993 ===================================================== 00:07:32.993 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:32.993 ===================================================== 00:07:32.993 Controller Capabilities/Features 00:07:32.993 ================================ 00:07:32.993 Vendor ID: 1b36 00:07:32.993 Subsystem Vendor ID: 1af4 00:07:32.993 Serial Number: 12343 00:07:32.993 Model Number: QEMU NVMe Ctrl 00:07:32.993 Firmware Version: 8.0.0 00:07:32.993 Recommended Arb Burst: 6 00:07:32.993 IEEE OUI Identifier: 00 54 52 00:07:32.993 Multi-path I/O 00:07:32.993 May have multiple subsystem ports: No 00:07:32.993 May have multiple controllers: Yes 00:07:32.993 Associated with SR-IOV VF: No 00:07:32.993 Max Data Transfer Size: 524288 00:07:32.993 Max Number of Namespaces: 256 00:07:32.993 Max Number of I/O Queues: 64 00:07:32.993 NVMe Specification Version (VS): 1.4 00:07:32.993 NVMe Specification Version (Identify): 1.4 00:07:32.993 Maximum Queue Entries: 2048 
00:07:32.993 Contiguous Queues Required: Yes 00:07:32.993 Arbitration Mechanisms Supported 00:07:32.993 Weighted Round Robin: Not Supported 00:07:32.993 Vendor Specific: Not Supported 00:07:32.993 Reset Timeout: 7500 ms 00:07:32.993 Doorbell Stride: 4 bytes 00:07:32.993 NVM Subsystem Reset: Not Supported 00:07:32.993 Command Sets Supported 00:07:32.993 NVM Command Set: Supported 00:07:32.993 Boot Partition: Not Supported 00:07:32.993 Memory Page Size Minimum: 4096 bytes 00:07:32.993 Memory Page Size Maximum: 65536 bytes 00:07:32.993 Persistent Memory Region: Not Supported 00:07:32.993 Optional Asynchronous Events Supported 00:07:32.993 Namespace Attribute Notices: Supported 00:07:32.993 Firmware Activation Notices: Not Supported 00:07:32.993 ANA Change Notices: Not Supported 00:07:32.993 PLE Aggregate Log Change Notices: Not Supported 00:07:32.993 LBA Status Info Alert Notices: Not Supported 00:07:32.993 EGE Aggregate Log Change Notices: Not Supported 00:07:32.993 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.993 Zone Descriptor Change Notices: Not Supported 00:07:32.993 Discovery Log Change Notices: Not Supported 00:07:32.993 Controller Attributes 00:07:32.993 128-bit Host Identifier: Not Supported 00:07:32.993 Non-Operational Permissive Mode: Not Supported 00:07:32.993 NVM Sets: Not Supported 00:07:32.993 Read Recovery Levels: Not Supported 00:07:32.993 Endurance Groups: Supported 00:07:32.993 Predictable Latency Mode: Not Supported 00:07:32.993 Traffic Based Keep Alive: Not Supported 00:07:32.993 Namespace Granularity: Not Supported 00:07:32.993 SQ Associations: Not Supported 00:07:32.993 UUID List: Not Supported 00:07:32.994 Multi-Domain Subsystem: Not Supported 00:07:32.994 Fixed Capacity Management: Not Supported 00:07:32.994 Variable Capacity Management: Not Supported 00:07:32.994 Delete Endurance Group: Not Supported 00:07:32.994 Delete NVM Set: Not Supported 00:07:32.994 Extended LBA Formats Supported: Supported 00:07:32.994 Flexible Data Placement Supported: Supported 00:07:32.994 00:07:32.994 Controller Memory Buffer Support 00:07:32.994 ================================ 00:07:32.994 Supported: No 00:07:32.994 00:07:32.994 Persistent Memory Region Support 00:07:32.994 ================================ 00:07:32.994 Supported: No 00:07:32.994 00:07:32.994 Admin Command Set Attributes 00:07:32.994 ============================ 00:07:32.994 Security Send/Receive: Not Supported 00:07:32.994 Format NVM: Supported 00:07:32.994 Firmware Activate/Download: Not Supported 00:07:32.994 Namespace Management: Supported 00:07:32.994 Device Self-Test: Not Supported 00:07:32.994 Directives: Supported 00:07:32.994 NVMe-MI: Not Supported 00:07:32.994 Virtualization Management: Not Supported 00:07:32.994 Doorbell Buffer Config: Supported 00:07:32.994 Get LBA Status Capability: Not Supported 00:07:32.994 Command & Feature Lockdown Capability: Not Supported 00:07:32.994 Abort Command Limit: 4 00:07:32.994 Async Event Request Limit: 4 00:07:32.994 Number of Firmware Slots: N/A 00:07:32.994 Firmware Slot 1 Read-Only: N/A 00:07:32.994 Firmware Activation Without Reset: N/A 00:07:32.994 Multiple Update Detection Support: N/A 00:07:32.994 Firmware Update Granularity: No Information Provided 00:07:32.994 Per-Namespace SMART Log: Yes 00:07:32.994 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.994 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:32.994 Command Effects Log Page: Supported 00:07:32.994 Get Log Page Extended Data: Supported 00:07:32.994 Telemetry Log Pages: Not 
Supported 00:07:32.994 Persistent Event Log Pages: Not Supported 00:07:32.994 Supported Log Pages Log Page: May Support 00:07:32.994 Commands Supported & Effects Log Page: Not Supported 00:07:32.994 Feature Identifiers & Effects Log Page: May Support 00:07:32.994 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.994 Data Area 4 for Telemetry Log: Not Supported 00:07:32.994 Error Log Page Entries Supported: 1 00:07:32.994 Keep Alive: Not Supported 00:07:32.994 00:07:32.994 NVM Command Set Attributes 00:07:32.994 ========================== 00:07:32.994 Submission Queue Entry Size 00:07:32.994 Max: 64 00:07:32.994 Min: 64 00:07:32.994 Completion Queue Entry Size 00:07:32.994 Max: 16 00:07:32.994 Min: 16 00:07:32.994 Number of Namespaces: 256 00:07:32.994 Compare Command: Supported 00:07:32.994 Write Uncorrectable Command: Not Supported 00:07:32.994 Dataset Management Command: Supported 00:07:32.994 Write Zeroes Command: Supported 00:07:32.994 Set Features Save Field: Supported 00:07:32.994 Reservations: Not Supported 00:07:32.994 Timestamp: Supported 00:07:32.994 Copy: Supported 00:07:32.994 Volatile Write Cache: Present 00:07:32.994 Atomic Write Unit (Normal): 1 00:07:32.994 Atomic Write Unit (PFail): 1 00:07:32.994 Atomic Compare & Write Unit: 1 00:07:32.994 Fused Compare & Write: Not Supported 00:07:32.994 Scatter-Gather List 00:07:32.994 SGL Command Set: Supported 00:07:32.994 SGL Keyed: Not Supported 00:07:32.994 SGL Bit Bucket Descriptor: Not Supported 00:07:32.994 SGL Metadata Pointer: Not Supported 00:07:32.994 Oversized SGL: Not Supported 00:07:32.994 SGL Metadata Address: Not Supported 00:07:32.994 SGL Offset: Not Supported 00:07:32.994 Transport SGL Data Block: Not Supported 00:07:32.994 Replay Protected Memory Block: Not Supported 00:07:32.994 00:07:32.994 Firmware Slot Information 00:07:32.994 ========================= 00:07:32.994 Active slot: 1 00:07:32.994 Slot 1 Firmware Revision: 1.0 00:07:32.994 00:07:32.994 00:07:32.994 Commands Supported and Effects 00:07:32.994 ============================== 00:07:32.994 Admin Commands 00:07:32.994 -------------- 00:07:32.994 Delete I/O Submission Queue (00h): Supported 00:07:32.994 Create I/O Submission Queue (01h): Supported 00:07:32.994 Get Log Page (02h): Supported 00:07:32.994 Delete I/O Completion Queue (04h): Supported 00:07:32.994 Create I/O Completion Queue (05h): Supported 00:07:32.994 Identify (06h): Supported 00:07:32.994 Abort (08h): Supported 00:07:32.994 Set Features (09h): Supported 00:07:32.994 Get Features (0Ah): Supported 00:07:32.994 Asynchronous Event Request (0Ch): Supported 00:07:32.994 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.994 Directive Send (19h): Supported 00:07:32.994 Directive Receive (1Ah): Supported 00:07:32.994 Virtualization Management (1Ch): Supported 00:07:32.994 Doorbell Buffer Config (7Ch): Supported 00:07:32.994 Format NVM (80h): Supported LBA-Change 00:07:32.994 I/O Commands 00:07:32.994 ------------ 00:07:32.994 Flush (00h): Supported LBA-Change 00:07:32.994 Write (01h): Supported LBA-Change 00:07:32.994 Read (02h): Supported 00:07:32.994 Compare (05h): Supported 00:07:32.994 Write Zeroes (08h): Supported LBA-Change 00:07:32.994 Dataset Management (09h): Supported LBA-Change 00:07:32.994 Unknown (0Ch): Supported 00:07:32.994 Unknown (12h): Supported 00:07:32.994 Copy (19h): Supported LBA-Change 00:07:32.994 Unknown (1Dh): Supported LBA-Change 00:07:32.994 00:07:32.994 Error Log 00:07:32.994 ========= 00:07:32.994 00:07:32.994 Arbitration 00:07:32.994 =========== 
00:07:32.994 Arbitration Burst: no limit 00:07:32.994 00:07:32.994 Power Management 00:07:32.994 ================ 00:07:32.994 Number of Power States: 1 00:07:32.994 Current Power State: Power State #0 00:07:32.994 Power State #0: 00:07:32.994 Max Power: 25.00 W 00:07:32.994 Non-Operational State: Operational 00:07:32.994 Entry Latency: 16 microseconds 00:07:32.994 Exit Latency: 4 microseconds 00:07:32.994 Relative Read Throughput: 0 00:07:32.994 Relative Read Latency: 0 00:07:32.994 Relative Write Throughput: 0 00:07:32.994 Relative Write Latency: 0 00:07:32.994 Idle Power: Not Reported 00:07:32.994 Active Power: Not Reported 00:07:32.994 Non-Operational Permissive Mode: Not Supported 00:07:32.994 00:07:32.994 Health Information 00:07:32.994 ================== 00:07:32.994 Critical Warnings: 00:07:32.994 Available Spare Space: OK 00:07:32.994 Temperature: OK 00:07:32.994 Device Reliability: OK 00:07:32.994 Read Only: No 00:07:32.994 Volatile Memory Backup: OK 00:07:32.994 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.994 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.994 Available Spare: 0% 00:07:32.994 Available Spare Threshold: 0% 00:07:32.994 Life Percentage Used: 0% 00:07:32.994 Data Units Read: 830 00:07:32.994 Data Units Written: 759 00:07:32.994 Host Read Commands: 37999 00:07:32.994 Host Write Commands: 37422 00:07:32.994 Controller Busy Time: 0 minutes 00:07:32.994 Power Cycles: 0 00:07:32.994 Power On Hours: 0 hours 00:07:32.994 Unsafe Shutdowns: 0 00:07:32.994 Unrecoverable Media Errors: 0 00:07:32.994 Lifetime Error Log Entries: 0 00:07:32.994 Warning Temperature Time: 0 minutes 00:07:32.994 Critical Temperature Time: 0 minutes 00:07:32.994 00:07:32.994 Number of Queues 00:07:32.994 ================ 00:07:32.994 Number of I/O Submission Queues: 64 00:07:32.994 Number of I/O Completion Queues: 64 00:07:32.994 00:07:32.994 ZNS Specific Controller Data 00:07:32.994 ============================ 00:07:32.994 Zone Append Size Limit: 0 00:07:32.994 00:07:32.994 00:07:32.994 Active Namespaces 00:07:32.994 ================= 00:07:32.994 Namespace ID:1 00:07:32.994 Error Recovery Timeout: Unlimited 00:07:32.994 Command Set Identifier: NVM (00h) 00:07:32.994 Deallocate: Supported 00:07:32.994 Deallocated/Unwritten Error: Supported 00:07:32.994 Deallocated Read Value: All 0x00 00:07:32.994 Deallocate in Write Zeroes: Not Supported 00:07:32.994 Deallocated Guard Field: 0xFFFF 00:07:32.994 Flush: Supported 00:07:32.994 Reservation: Not Supported 00:07:32.994 Namespace Sharing Capabilities: Multiple Controllers 00:07:32.994 Size (in LBAs): 262144 (1GiB) 00:07:32.994 Capacity (in LBAs): 262144 (1GiB) 00:07:32.994 Utilization (in LBAs): 262144 (1GiB) 00:07:32.994 Thin Provisioning: Not Supported 00:07:32.994 Per-NS Atomic Units: No 00:07:32.994 Maximum Single Source Range Length: 128 00:07:32.994 Maximum Copy Length: 128 00:07:32.994 Maximum Source Range Count: 128 00:07:32.994 NGUID/EUI64 Never Reused: No 00:07:32.994 Namespace Write Protected: No 00:07:32.994 Endurance group ID: 1 00:07:32.994 Number of LBA Formats: 8 00:07:32.994 Current LBA Format: LBA Format #04 00:07:32.994 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.994 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.994 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.994 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.994 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.994 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.995 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:32.995 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.995 00:07:32.995 Get Feature FDP: 00:07:32.995 ================ 00:07:32.995 Enabled: Yes 00:07:32.995 FDP configuration index: 0 00:07:32.995 00:07:32.995 FDP configurations log page 00:07:32.995 =========================== 00:07:32.995 Number of FDP configurations: 1 00:07:32.995 Version: 0 00:07:32.995 Size: 112 00:07:32.995 FDP Configuration Descriptor: 0 00:07:32.995 Descriptor Size: 96 00:07:32.995 Reclaim Group Identifier format: 2 00:07:32.995 FDP Volatile Write Cache: Not Present 00:07:32.995 FDP Configuration: Valid 00:07:32.995 Vendor Specific Size: 0 00:07:32.995 Number of Reclaim Groups: 2 00:07:32.995 Number of Reclaim Unit Handles: 8 00:07:32.995 Max Placement Identifiers: 128 00:07:32.995 Number of Namespaces Supported: 256 00:07:32.995 Reclaim Unit Nominal Size: 6000000 bytes 00:07:32.995 Estimated Reclaim Unit Time Limit: Not Reported 00:07:32.995 RUH Desc #000: RUH Type: Initially Isolated 00:07:32.995 RUH Desc #001: RUH Type: Initially Isolated 00:07:32.995 RUH Desc #002: RUH Type: Initially Isolated 00:07:32.995 RUH Desc #003: RUH Type: Initially Isolated 00:07:32.995 RUH Desc #004: RUH Type: Initially Isolated 00:07:32.995 RUH Desc #005: RUH Type: Initially Isolated 00:07:32.995 RUH Desc #006: RUH Type: Initially Isolated 00:07:32.995 RUH Desc #007: RUH Type: Initially Isolated 00:07:32.995 00:07:32.995 FDP reclaim unit handle usage log page 00:07:32.995 ====================================== 00:07:32.995 Number of Reclaim Unit Handles: 8 00:07:32.995 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:32.995 RUH Usage Desc #001: RUH Attributes: Unused 00:07:32.995 RUH Usage Desc #002: RUH Attributes: Unused 00:07:32.995 RUH Usage Desc #003: RUH Attributes: Unused 00:07:32.995 RUH Usage Desc #004: RUH Attributes: Unused 00:07:32.995 RUH Usage Desc #005: RUH Attributes: Unused 00:07:32.995 RUH Usage Desc #006: RUH Attributes: Unused 00:07:32.995 RUH Usage Desc #007: RUH Attributes: Unused 00:07:32.995 00:07:32.995 FDP statistics log page 00:07:32.995 ======================= 00:07:32.995 Host bytes with metadata written: 490708992 00:07:32.995 Media bytes with metadata written: 490754048 00:07:32.995 Media bytes erased: 0 00:07:32.995 00:07:32.995 FDP events log page 00:07:32.995 =================== 00:07:32.995 Number of FDP events: 0 00:07:32.995 00:07:32.995 NVM Specific Namespace Data 00:07:32.995 =========================== 00:07:32.995 Logical Block Storage Tag Mask: 0 00:07:32.995 Protection Information Capabilities: 00:07:32.995 16b Guard Protection Information Storage Tag Support: No 00:07:32.995 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.995 Storage Tag Check Read Support: No 00:07:32.995 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.995 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.995 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.995 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.995 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.995 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.995 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.995 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.995 00:07:32.995 real 0m1.195s 00:07:32.995 user 0m0.456s 00:07:32.995 sys 0m0.516s 00:07:32.995 12:19:39 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.995 12:19:39 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:32.995 ************************************ 00:07:32.995 END TEST nvme_identify 00:07:32.995 ************************************ 00:07:32.995 12:19:40 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:32.995 12:19:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.995 12:19:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.995 12:19:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.995 ************************************ 00:07:32.995 START TEST nvme_perf 00:07:32.995 ************************************ 00:07:32.995 12:19:40 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:32.995 12:19:40 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:34.372 Initializing NVMe Controllers 00:07:34.372 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:34.372 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:34.372 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:34.372 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:34.372 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:34.372 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:34.372 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:34.372 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:34.372 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:34.372 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:34.372 Initialization complete. Launching workers. 
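The identify pass that just finished and the perf run launched above can be replayed by hand. Below is a minimal sketch, assuming the repo checkout used in this run at /home/vagrant/spdk_repo/spdk and the same QEMU-emulated controllers; the literal BDF list is a stand-in for the harness's ${bdfs[@]} array, and -i/-N are carried over from the logged invocations as-is:

    #!/usr/bin/env bash
    # Identify each NVMe controller the harness attached, one PCIe BDF at a time.
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done
    # Repeat the read workload measured below: queue depth 128 (-q), 12288-byte
    # I/Os (-o, three 4 KiB blocks in the current LBA format #04), running for
    # 1 second (-t). -L enables latency tracking; it is doubled here exactly as
    # in the logged command, whose output includes both the percentile summary
    # and the per-range histograms that follow.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

With these parameters the summary table below is self-consistent: 20172.92 IOPS × 12288 bytes ≈ 236.40 MiB/s per namespace, and six namespaces account for the reported totals of 121037.53 IOPS and 1418.41 MiB/s.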
00:07:34.372 ======================================================== 00:07:34.372 Latency(us) 00:07:34.372 Device Information : IOPS MiB/s Average min max 00:07:34.372 PCIE (0000:00:10.0) NSID 1 from core 0: 20172.92 236.40 6352.86 5403.52 29821.56 00:07:34.372 PCIE (0000:00:11.0) NSID 1 from core 0: 20172.92 236.40 6344.37 5492.74 28138.15 00:07:34.372 PCIE (0000:00:13.0) NSID 1 from core 0: 20172.92 236.40 6334.45 5478.68 26826.75 00:07:34.372 PCIE (0000:00:12.0) NSID 1 from core 0: 20172.92 236.40 6324.37 5494.75 25088.85 00:07:34.372 PCIE (0000:00:12.0) NSID 2 from core 0: 20172.92 236.40 6314.67 5474.95 23411.46 00:07:34.372 PCIE (0000:00:12.0) NSID 3 from core 0: 20172.92 236.40 6304.63 5510.58 21737.81 00:07:34.372 ======================================================== 00:07:34.372 Total : 121037.53 1418.41 6329.22 5403.52 29821.56 00:07:34.372 00:07:34.372 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:34.372 ================================================================================= 00:07:34.372 1.00000% : 5520.148us 00:07:34.372 10.00000% : 5646.178us 00:07:34.372 25.00000% : 5822.622us 00:07:34.372 50.00000% : 6099.889us 00:07:34.372 75.00000% : 6377.157us 00:07:34.372 90.00000% : 6604.012us 00:07:34.372 95.00000% : 7813.908us 00:07:34.372 98.00000% : 10082.462us 00:07:34.372 99.00000% : 11443.594us 00:07:34.372 99.50000% : 23794.609us 00:07:34.372 99.90000% : 29440.788us 00:07:34.372 99.99000% : 29844.086us 00:07:34.372 99.99900% : 29844.086us 00:07:34.372 99.99990% : 29844.086us 00:07:34.372 99.99999% : 29844.086us 00:07:34.372 00:07:34.372 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:34.372 ================================================================================= 00:07:34.372 1.00000% : 5595.766us 00:07:34.372 10.00000% : 5721.797us 00:07:34.372 25.00000% : 5847.828us 00:07:34.372 50.00000% : 6099.889us 00:07:34.372 75.00000% : 6326.745us 00:07:34.372 90.00000% : 6553.600us 00:07:34.372 95.00000% : 7713.083us 00:07:34.372 98.00000% : 10032.049us 00:07:34.372 99.00000% : 11191.532us 00:07:34.372 99.50000% : 22584.714us 00:07:34.372 99.90000% : 27827.594us 00:07:34.372 99.99000% : 28230.892us 00:07:34.372 99.99900% : 28230.892us 00:07:34.372 99.99990% : 28230.892us 00:07:34.372 99.99999% : 28230.892us 00:07:34.372 00:07:34.372 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:34.372 ================================================================================= 00:07:34.372 1.00000% : 5595.766us 00:07:34.372 10.00000% : 5721.797us 00:07:34.372 25.00000% : 5873.034us 00:07:34.372 50.00000% : 6099.889us 00:07:34.372 75.00000% : 6326.745us 00:07:34.372 90.00000% : 6553.600us 00:07:34.372 95.00000% : 7763.495us 00:07:34.372 98.00000% : 9880.812us 00:07:34.372 99.00000% : 11191.532us 00:07:34.372 99.50000% : 21173.169us 00:07:34.372 99.90000% : 26416.049us 00:07:34.372 99.99000% : 26819.348us 00:07:34.372 99.99900% : 27020.997us 00:07:34.372 99.99990% : 27020.997us 00:07:34.372 99.99999% : 27020.997us 00:07:34.372 00:07:34.372 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:34.372 ================================================================================= 00:07:34.372 1.00000% : 5595.766us 00:07:34.372 10.00000% : 5721.797us 00:07:34.372 25.00000% : 5847.828us 00:07:34.372 50.00000% : 6099.889us 00:07:34.372 75.00000% : 6326.745us 00:07:34.372 90.00000% : 6553.600us 00:07:34.372 95.00000% : 7763.495us 00:07:34.372 98.00000% : 9931.225us 00:07:34.372 99.00000% : 
11241.945us 00:07:34.372 99.50000% : 19358.326us 00:07:34.372 99.90000% : 24601.206us 00:07:34.372 99.99000% : 25105.329us 00:07:34.372 99.99900% : 25105.329us 00:07:34.372 99.99990% : 25105.329us 00:07:34.372 99.99999% : 25105.329us 00:07:34.372 00:07:34.372 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:34.372 ================================================================================= 00:07:34.372 1.00000% : 5595.766us 00:07:34.372 10.00000% : 5721.797us 00:07:34.372 25.00000% : 5873.034us 00:07:34.372 50.00000% : 6099.889us 00:07:34.372 75.00000% : 6326.745us 00:07:34.372 90.00000% : 6553.600us 00:07:34.372 95.00000% : 7864.320us 00:07:34.372 98.00000% : 10032.049us 00:07:34.372 99.00000% : 11443.594us 00:07:34.372 99.50000% : 17644.308us 00:07:34.372 99.90000% : 22988.012us 00:07:34.372 99.99000% : 23391.311us 00:07:34.372 99.99900% : 23492.135us 00:07:34.372 99.99990% : 23492.135us 00:07:34.372 99.99999% : 23492.135us 00:07:34.372 00:07:34.373 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:34.373 ================================================================================= 00:07:34.373 1.00000% : 5595.766us 00:07:34.373 10.00000% : 5721.797us 00:07:34.373 25.00000% : 5847.828us 00:07:34.373 50.00000% : 6099.889us 00:07:34.373 75.00000% : 6326.745us 00:07:34.373 90.00000% : 6553.600us 00:07:34.373 95.00000% : 7864.320us 00:07:34.373 98.00000% : 10183.286us 00:07:34.373 99.00000% : 11645.243us 00:07:34.373 99.50000% : 15930.289us 00:07:34.373 99.90000% : 21273.994us 00:07:34.373 99.99000% : 21778.117us 00:07:34.373 99.99900% : 21778.117us 00:07:34.373 99.99990% : 21778.117us 00:07:34.373 99.99999% : 21778.117us 00:07:34.373 00:07:34.373 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:34.373 ============================================================================== 00:07:34.373 Range in us Cumulative IO count 00:07:34.373 5394.117 - 5419.323: 0.0247% ( 5) 00:07:34.373 5419.323 - 5444.529: 0.0742% ( 10) 00:07:34.373 5444.529 - 5469.735: 0.3066% ( 47) 00:07:34.373 5469.735 - 5494.942: 0.9197% ( 124) 00:07:34.373 5494.942 - 5520.148: 2.0916% ( 237) 00:07:34.373 5520.148 - 5545.354: 3.5750% ( 300) 00:07:34.373 5545.354 - 5570.560: 5.2364% ( 336) 00:07:34.373 5570.560 - 5595.766: 7.0906% ( 375) 00:07:34.373 5595.766 - 5620.972: 9.0684% ( 400) 00:07:34.373 5620.972 - 5646.178: 11.1056% ( 412) 00:07:34.373 5646.178 - 5671.385: 13.1873% ( 421) 00:07:34.373 5671.385 - 5696.591: 15.2987% ( 427) 00:07:34.373 5696.591 - 5721.797: 17.3952% ( 424) 00:07:34.373 5721.797 - 5747.003: 19.5263% ( 431) 00:07:34.373 5747.003 - 5772.209: 21.7464% ( 449) 00:07:34.373 5772.209 - 5797.415: 23.9072% ( 437) 00:07:34.373 5797.415 - 5822.622: 26.0977% ( 443) 00:07:34.373 5822.622 - 5847.828: 28.3722% ( 460) 00:07:34.373 5847.828 - 5873.034: 30.6863% ( 468) 00:07:34.373 5873.034 - 5898.240: 32.9361% ( 455) 00:07:34.373 5898.240 - 5923.446: 35.2205% ( 462) 00:07:34.373 5923.446 - 5948.652: 37.5989% ( 481) 00:07:34.373 5948.652 - 5973.858: 39.8685% ( 459) 00:07:34.373 5973.858 - 5999.065: 42.1133% ( 454) 00:07:34.373 5999.065 - 6024.271: 44.3780% ( 458) 00:07:34.373 6024.271 - 6049.477: 46.6278% ( 455) 00:07:34.373 6049.477 - 6074.683: 49.0605% ( 492) 00:07:34.373 6074.683 - 6099.889: 51.3301% ( 459) 00:07:34.373 6099.889 - 6125.095: 53.6936% ( 478) 00:07:34.373 6125.095 - 6150.302: 55.9385% ( 454) 00:07:34.373 6150.302 - 6175.508: 58.2526% ( 468) 00:07:34.373 6175.508 - 6200.714: 60.5568% ( 466) 00:07:34.373 6200.714 - 6225.920: 62.8758% 
( 469) 00:07:34.373 6225.920 - 6251.126: 65.1553% ( 461) 00:07:34.373 6251.126 - 6276.332: 67.5435% ( 483) 00:07:34.373 6276.332 - 6301.538: 69.8527% ( 467) 00:07:34.373 6301.538 - 6326.745: 72.1964% ( 474) 00:07:34.373 6326.745 - 6351.951: 74.4511% ( 456) 00:07:34.373 6351.951 - 6377.157: 76.7108% ( 457) 00:07:34.373 6377.157 - 6402.363: 79.1584% ( 495) 00:07:34.373 6402.363 - 6427.569: 81.3192% ( 437) 00:07:34.373 6427.569 - 6452.775: 83.6828% ( 478) 00:07:34.373 6452.775 - 6503.188: 87.0896% ( 689) 00:07:34.373 6503.188 - 6553.600: 89.1070% ( 408) 00:07:34.373 6553.600 - 6604.012: 90.3481% ( 251) 00:07:34.373 6604.012 - 6654.425: 91.1145% ( 155) 00:07:34.373 6654.425 - 6704.837: 91.5843% ( 95) 00:07:34.373 6704.837 - 6755.249: 91.9600% ( 76) 00:07:34.373 6755.249 - 6805.662: 92.2369% ( 56) 00:07:34.373 6805.662 - 6856.074: 92.4545% ( 44) 00:07:34.373 6856.074 - 6906.486: 92.6375% ( 37) 00:07:34.373 6906.486 - 6956.898: 92.8155% ( 36) 00:07:34.373 6956.898 - 7007.311: 92.9688% ( 31) 00:07:34.373 7007.311 - 7057.723: 93.1319% ( 33) 00:07:34.373 7057.723 - 7108.135: 93.2555% ( 25) 00:07:34.373 7108.135 - 7158.548: 93.3742% ( 24) 00:07:34.373 7158.548 - 7208.960: 93.5176% ( 29) 00:07:34.373 7208.960 - 7259.372: 93.6561% ( 28) 00:07:34.373 7259.372 - 7309.785: 93.8143% ( 32) 00:07:34.373 7309.785 - 7360.197: 93.9577% ( 29) 00:07:34.373 7360.197 - 7410.609: 94.0912% ( 27) 00:07:34.373 7410.609 - 7461.022: 94.2197% ( 26) 00:07:34.373 7461.022 - 7511.434: 94.3285% ( 22) 00:07:34.373 7511.434 - 7561.846: 94.4670% ( 28) 00:07:34.373 7561.846 - 7612.258: 94.5560% ( 18) 00:07:34.373 7612.258 - 7662.671: 94.6796% ( 25) 00:07:34.373 7662.671 - 7713.083: 94.7884% ( 22) 00:07:34.373 7713.083 - 7763.495: 94.9120% ( 25) 00:07:34.373 7763.495 - 7813.908: 95.0455% ( 27) 00:07:34.373 7813.908 - 7864.320: 95.1642% ( 24) 00:07:34.373 7864.320 - 7914.732: 95.2729% ( 22) 00:07:34.373 7914.732 - 7965.145: 95.3669% ( 19) 00:07:34.373 7965.145 - 8015.557: 95.4905% ( 25) 00:07:34.373 8015.557 - 8065.969: 95.5894% ( 20) 00:07:34.373 8065.969 - 8116.382: 95.6833% ( 19) 00:07:34.373 8116.382 - 8166.794: 95.7872% ( 21) 00:07:34.373 8166.794 - 8217.206: 95.8663% ( 16) 00:07:34.373 8217.206 - 8267.618: 95.9355% ( 14) 00:07:34.373 8267.618 - 8318.031: 96.0047% ( 14) 00:07:34.373 8318.031 - 8368.443: 96.0789% ( 15) 00:07:34.373 8368.443 - 8418.855: 96.1531% ( 15) 00:07:34.373 8418.855 - 8469.268: 96.2124% ( 12) 00:07:34.373 8469.268 - 8519.680: 96.2767% ( 13) 00:07:34.373 8519.680 - 8570.092: 96.3509% ( 15) 00:07:34.373 8570.092 - 8620.505: 96.4201% ( 14) 00:07:34.373 8620.505 - 8670.917: 96.4943% ( 15) 00:07:34.373 8670.917 - 8721.329: 96.5684% ( 15) 00:07:34.373 8721.329 - 8771.742: 96.6475% ( 16) 00:07:34.373 8771.742 - 8822.154: 96.7019% ( 11) 00:07:34.373 8822.154 - 8872.566: 96.7563% ( 11) 00:07:34.373 8872.566 - 8922.978: 96.8354% ( 16) 00:07:34.373 8922.978 - 8973.391: 96.9195% ( 17) 00:07:34.373 8973.391 - 9023.803: 96.9986% ( 16) 00:07:34.373 9023.803 - 9074.215: 97.0431% ( 9) 00:07:34.373 9074.215 - 9124.628: 97.1123% ( 14) 00:07:34.373 9124.628 - 9175.040: 97.1816% ( 14) 00:07:34.373 9175.040 - 9225.452: 97.2409% ( 12) 00:07:34.373 9225.452 - 9275.865: 97.3101% ( 14) 00:07:34.373 9275.865 - 9326.277: 97.3744% ( 13) 00:07:34.373 9326.277 - 9376.689: 97.4239% ( 10) 00:07:34.373 9376.689 - 9427.102: 97.4782% ( 11) 00:07:34.373 9427.102 - 9477.514: 97.5326% ( 11) 00:07:34.373 9477.514 - 9527.926: 97.5870% ( 11) 00:07:34.373 9527.926 - 9578.338: 97.6315% ( 9) 00:07:34.373 9578.338 - 9628.751: 97.6612% ( 6) 
00:07:34.373 9628.751 - 9679.163: 97.7008% ( 8) 00:07:34.373 9679.163 - 9729.575: 97.7403% ( 8) 00:07:34.373 9729.575 - 9779.988: 97.7848% ( 9) 00:07:34.373 9779.988 - 9830.400: 97.8244% ( 8) 00:07:34.373 9830.400 - 9880.812: 97.8639% ( 8) 00:07:34.373 9880.812 - 9931.225: 97.9084% ( 9) 00:07:34.373 9931.225 - 9981.637: 97.9430% ( 7) 00:07:34.373 9981.637 - 10032.049: 97.9826% ( 8) 00:07:34.373 10032.049 - 10082.462: 98.0222% ( 8) 00:07:34.373 10082.462 - 10132.874: 98.0667% ( 9) 00:07:34.373 10132.874 - 10183.286: 98.1112% ( 9) 00:07:34.373 10183.286 - 10233.698: 98.1458% ( 7) 00:07:34.373 10233.698 - 10284.111: 98.1903% ( 9) 00:07:34.373 10284.111 - 10334.523: 98.2298% ( 8) 00:07:34.373 10334.523 - 10384.935: 98.2694% ( 8) 00:07:34.373 10384.935 - 10435.348: 98.3139% ( 9) 00:07:34.373 10435.348 - 10485.760: 98.3485% ( 7) 00:07:34.373 10485.760 - 10536.172: 98.4078% ( 12) 00:07:34.373 10536.172 - 10586.585: 98.4523% ( 9) 00:07:34.373 10586.585 - 10636.997: 98.5018% ( 10) 00:07:34.373 10636.997 - 10687.409: 98.5463% ( 9) 00:07:34.373 10687.409 - 10737.822: 98.5809% ( 7) 00:07:34.373 10737.822 - 10788.234: 98.6205% ( 8) 00:07:34.373 10788.234 - 10838.646: 98.6650% ( 9) 00:07:34.373 10838.646 - 10889.058: 98.7045% ( 8) 00:07:34.373 10889.058 - 10939.471: 98.7292% ( 5) 00:07:34.373 10939.471 - 10989.883: 98.7589% ( 6) 00:07:34.373 10989.883 - 11040.295: 98.7836% ( 5) 00:07:34.373 11040.295 - 11090.708: 98.8034% ( 4) 00:07:34.373 11090.708 - 11141.120: 98.8331% ( 6) 00:07:34.373 11141.120 - 11191.532: 98.8726% ( 8) 00:07:34.373 11191.532 - 11241.945: 98.9122% ( 8) 00:07:34.373 11241.945 - 11292.357: 98.9468% ( 7) 00:07:34.373 11292.357 - 11342.769: 98.9616% ( 3) 00:07:34.373 11342.769 - 11393.182: 98.9913% ( 6) 00:07:34.373 11393.182 - 11443.594: 99.0111% ( 4) 00:07:34.373 11443.594 - 11494.006: 99.0457% ( 7) 00:07:34.373 11494.006 - 11544.418: 99.0704% ( 5) 00:07:34.373 11544.418 - 11594.831: 99.0951% ( 5) 00:07:34.373 11594.831 - 11645.243: 99.1149% ( 4) 00:07:34.373 11645.243 - 11695.655: 99.1396% ( 5) 00:07:34.373 11695.655 - 11746.068: 99.1693% ( 6) 00:07:34.373 11746.068 - 11796.480: 99.1940% ( 5) 00:07:34.373 11796.480 - 11846.892: 99.2188% ( 5) 00:07:34.373 11846.892 - 11897.305: 99.2435% ( 5) 00:07:34.373 11897.305 - 11947.717: 99.2682% ( 5) 00:07:34.373 11947.717 - 11998.129: 99.2880% ( 4) 00:07:34.373 11998.129 - 12048.542: 99.3176% ( 6) 00:07:34.373 12048.542 - 12098.954: 99.3424% ( 5) 00:07:34.373 12098.954 - 12149.366: 99.3621% ( 4) 00:07:34.373 12149.366 - 12199.778: 99.3671% ( 1) 00:07:34.373 22887.188 - 22988.012: 99.3720% ( 1) 00:07:34.373 22988.012 - 23088.837: 99.3819% ( 2) 00:07:34.373 23088.837 - 23189.662: 99.3968% ( 3) 00:07:34.373 23189.662 - 23290.486: 99.4215% ( 5) 00:07:34.373 23290.486 - 23391.311: 99.4363% ( 3) 00:07:34.373 23391.311 - 23492.135: 99.4610% ( 5) 00:07:34.373 23492.135 - 23592.960: 99.4759% ( 3) 00:07:34.373 23592.960 - 23693.785: 99.4956% ( 4) 00:07:34.373 23693.785 - 23794.609: 99.5154% ( 4) 00:07:34.373 23794.609 - 23895.434: 99.5352% ( 4) 00:07:34.373 23895.434 - 23996.258: 99.5500% ( 3) 00:07:34.373 23996.258 - 24097.083: 99.5599% ( 2) 00:07:34.374 24097.083 - 24197.908: 99.5847% ( 5) 00:07:34.374 24197.908 - 24298.732: 99.6044% ( 4) 00:07:34.374 24298.732 - 24399.557: 99.6193% ( 3) 00:07:34.374 24399.557 - 24500.382: 99.6390% ( 4) 00:07:34.374 24500.382 - 24601.206: 99.6638% ( 5) 00:07:34.374 24601.206 - 24702.031: 99.6786% ( 3) 00:07:34.374 24702.031 - 24802.855: 99.6835% ( 1) 00:07:34.374 28029.243 - 28230.892: 99.7083% ( 5) 00:07:34.374 
28230.892 - 28432.542: 99.7478% ( 8) 00:07:34.374 28432.542 - 28634.191: 99.7874% ( 8) 00:07:34.374 28634.191 - 28835.840: 99.8220% ( 7) 00:07:34.374 28835.840 - 29037.489: 99.8566% ( 7) 00:07:34.374 29037.489 - 29239.138: 99.8863% ( 6) 00:07:34.374 29239.138 - 29440.788: 99.9308% ( 9) 00:07:34.374 29440.788 - 29642.437: 99.9654% ( 7) 00:07:34.374 29642.437 - 29844.086: 100.0000% ( 7) 00:07:34.374 00:07:34.374 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:34.374 ============================================================================== 00:07:34.374 Range in us Cumulative IO count 00:07:34.374 5469.735 - 5494.942: 0.0099% ( 2) 00:07:34.374 5494.942 - 5520.148: 0.0742% ( 13) 00:07:34.374 5520.148 - 5545.354: 0.3115% ( 48) 00:07:34.374 5545.354 - 5570.560: 0.8604% ( 111) 00:07:34.374 5570.560 - 5595.766: 1.9680% ( 224) 00:07:34.374 5595.766 - 5620.972: 3.5156% ( 313) 00:07:34.374 5620.972 - 5646.178: 5.3451% ( 370) 00:07:34.374 5646.178 - 5671.385: 7.4120% ( 418) 00:07:34.374 5671.385 - 5696.591: 9.6272% ( 448) 00:07:34.374 5696.591 - 5721.797: 11.9907% ( 478) 00:07:34.374 5721.797 - 5747.003: 14.4333% ( 494) 00:07:34.374 5747.003 - 5772.209: 16.9699% ( 513) 00:07:34.374 5772.209 - 5797.415: 19.5856% ( 529) 00:07:34.374 5797.415 - 5822.622: 22.2261% ( 534) 00:07:34.374 5822.622 - 5847.828: 25.0099% ( 563) 00:07:34.374 5847.828 - 5873.034: 27.7047% ( 545) 00:07:34.374 5873.034 - 5898.240: 30.4193% ( 549) 00:07:34.374 5898.240 - 5923.446: 33.1735% ( 557) 00:07:34.374 5923.446 - 5948.652: 35.9128% ( 554) 00:07:34.374 5948.652 - 5973.858: 38.6917% ( 562) 00:07:34.374 5973.858 - 5999.065: 41.3815% ( 544) 00:07:34.374 5999.065 - 6024.271: 44.1208% ( 554) 00:07:34.374 6024.271 - 6049.477: 46.8849% ( 559) 00:07:34.374 6049.477 - 6074.683: 49.5995% ( 549) 00:07:34.374 6074.683 - 6099.889: 52.3091% ( 548) 00:07:34.374 6099.889 - 6125.095: 55.0583% ( 556) 00:07:34.374 6125.095 - 6150.302: 57.7878% ( 552) 00:07:34.374 6150.302 - 6175.508: 60.5024% ( 549) 00:07:34.374 6175.508 - 6200.714: 63.2120% ( 548) 00:07:34.374 6200.714 - 6225.920: 65.9266% ( 549) 00:07:34.374 6225.920 - 6251.126: 68.7154% ( 564) 00:07:34.374 6251.126 - 6276.332: 71.5091% ( 565) 00:07:34.374 6276.332 - 6301.538: 74.2385% ( 552) 00:07:34.374 6301.538 - 6326.745: 76.9680% ( 552) 00:07:34.374 6326.745 - 6351.951: 79.6875% ( 550) 00:07:34.374 6351.951 - 6377.157: 82.2093% ( 510) 00:07:34.374 6377.157 - 6402.363: 84.4294% ( 449) 00:07:34.374 6402.363 - 6427.569: 86.1452% ( 347) 00:07:34.374 6427.569 - 6452.775: 87.4110% ( 256) 00:07:34.374 6452.775 - 6503.188: 89.1070% ( 343) 00:07:34.374 6503.188 - 6553.600: 90.1206% ( 205) 00:07:34.374 6553.600 - 6604.012: 90.8475% ( 147) 00:07:34.374 6604.012 - 6654.425: 91.3271% ( 97) 00:07:34.374 6654.425 - 6704.837: 91.6881% ( 73) 00:07:34.374 6704.837 - 6755.249: 91.9600% ( 55) 00:07:34.374 6755.249 - 6805.662: 92.1578% ( 40) 00:07:34.374 6805.662 - 6856.074: 92.3309% ( 35) 00:07:34.374 6856.074 - 6906.486: 92.5435% ( 43) 00:07:34.374 6906.486 - 6956.898: 92.7265% ( 37) 00:07:34.374 6956.898 - 7007.311: 92.8946% ( 34) 00:07:34.374 7007.311 - 7057.723: 93.0380% ( 29) 00:07:34.374 7057.723 - 7108.135: 93.2011% ( 33) 00:07:34.374 7108.135 - 7158.548: 93.3544% ( 31) 00:07:34.374 7158.548 - 7208.960: 93.5275% ( 35) 00:07:34.374 7208.960 - 7259.372: 93.6857% ( 32) 00:07:34.374 7259.372 - 7309.785: 93.8291% ( 29) 00:07:34.374 7309.785 - 7360.197: 93.9626% ( 27) 00:07:34.374 7360.197 - 7410.609: 94.1011% ( 28) 00:07:34.374 7410.609 - 7461.022: 94.2494% ( 30) 00:07:34.374 7461.022 
- 7511.434: 94.4076% ( 32) 00:07:34.374 7511.434 - 7561.846: 94.5708% ( 33) 00:07:34.374 7561.846 - 7612.258: 94.7290% ( 32) 00:07:34.374 7612.258 - 7662.671: 94.8823% ( 31) 00:07:34.374 7662.671 - 7713.083: 95.0208% ( 28) 00:07:34.374 7713.083 - 7763.495: 95.1543% ( 27) 00:07:34.374 7763.495 - 7813.908: 95.2977% ( 29) 00:07:34.374 7813.908 - 7864.320: 95.4213% ( 25) 00:07:34.374 7864.320 - 7914.732: 95.5498% ( 26) 00:07:34.374 7914.732 - 7965.145: 95.6586% ( 22) 00:07:34.374 7965.145 - 8015.557: 95.7476% ( 18) 00:07:34.374 8015.557 - 8065.969: 95.8317% ( 17) 00:07:34.374 8065.969 - 8116.382: 95.9059% ( 15) 00:07:34.374 8116.382 - 8166.794: 95.9652% ( 12) 00:07:34.374 8166.794 - 8217.206: 96.0344% ( 14) 00:07:34.374 8217.206 - 8267.618: 96.0839% ( 10) 00:07:34.374 8267.618 - 8318.031: 96.1135% ( 6) 00:07:34.374 8318.031 - 8368.443: 96.1432% ( 6) 00:07:34.374 8368.443 - 8418.855: 96.1926% ( 10) 00:07:34.374 8418.855 - 8469.268: 96.2520% ( 12) 00:07:34.374 8469.268 - 8519.680: 96.3212% ( 14) 00:07:34.374 8519.680 - 8570.092: 96.3855% ( 13) 00:07:34.374 8570.092 - 8620.505: 96.4448% ( 12) 00:07:34.374 8620.505 - 8670.917: 96.5042% ( 12) 00:07:34.374 8670.917 - 8721.329: 96.5635% ( 12) 00:07:34.374 8721.329 - 8771.742: 96.6179% ( 11) 00:07:34.374 8771.742 - 8822.154: 96.6772% ( 12) 00:07:34.374 8822.154 - 8872.566: 96.7316% ( 11) 00:07:34.374 8872.566 - 8922.978: 96.7860% ( 11) 00:07:34.374 8922.978 - 8973.391: 96.8453% ( 12) 00:07:34.374 8973.391 - 9023.803: 96.9047% ( 12) 00:07:34.374 9023.803 - 9074.215: 96.9838% ( 16) 00:07:34.374 9074.215 - 9124.628: 97.0629% ( 16) 00:07:34.374 9124.628 - 9175.040: 97.1470% ( 17) 00:07:34.374 9175.040 - 9225.452: 97.2360% ( 18) 00:07:34.374 9225.452 - 9275.865: 97.2953% ( 12) 00:07:34.374 9275.865 - 9326.277: 97.3398% ( 9) 00:07:34.374 9326.277 - 9376.689: 97.3744% ( 7) 00:07:34.374 9376.689 - 9427.102: 97.4140% ( 8) 00:07:34.374 9427.102 - 9477.514: 97.4486% ( 7) 00:07:34.374 9477.514 - 9527.926: 97.4881% ( 8) 00:07:34.374 9527.926 - 9578.338: 97.5277% ( 8) 00:07:34.374 9578.338 - 9628.751: 97.5574% ( 6) 00:07:34.374 9628.751 - 9679.163: 97.6019% ( 9) 00:07:34.374 9679.163 - 9729.575: 97.6661% ( 13) 00:07:34.374 9729.575 - 9779.988: 97.7304% ( 13) 00:07:34.374 9779.988 - 9830.400: 97.7996% ( 14) 00:07:34.374 9830.400 - 9880.812: 97.8639% ( 13) 00:07:34.374 9880.812 - 9931.225: 97.9233% ( 12) 00:07:34.374 9931.225 - 9981.637: 97.9777% ( 11) 00:07:34.374 9981.637 - 10032.049: 98.0271% ( 10) 00:07:34.374 10032.049 - 10082.462: 98.0716% ( 9) 00:07:34.374 10082.462 - 10132.874: 98.1112% ( 8) 00:07:34.374 10132.874 - 10183.286: 98.1507% ( 8) 00:07:34.374 10183.286 - 10233.698: 98.1952% ( 9) 00:07:34.374 10233.698 - 10284.111: 98.2397% ( 9) 00:07:34.374 10284.111 - 10334.523: 98.2842% ( 9) 00:07:34.374 10334.523 - 10384.935: 98.3436% ( 12) 00:07:34.374 10384.935 - 10435.348: 98.4078% ( 13) 00:07:34.374 10435.348 - 10485.760: 98.4622% ( 11) 00:07:34.374 10485.760 - 10536.172: 98.5166% ( 11) 00:07:34.374 10536.172 - 10586.585: 98.5611% ( 9) 00:07:34.374 10586.585 - 10636.997: 98.6007% ( 8) 00:07:34.374 10636.997 - 10687.409: 98.6452% ( 9) 00:07:34.374 10687.409 - 10737.822: 98.6847% ( 8) 00:07:34.374 10737.822 - 10788.234: 98.7193% ( 7) 00:07:34.374 10788.234 - 10838.646: 98.7638% ( 9) 00:07:34.374 10838.646 - 10889.058: 98.8083% ( 9) 00:07:34.374 10889.058 - 10939.471: 98.8528% ( 9) 00:07:34.374 10939.471 - 10989.883: 98.8924% ( 8) 00:07:34.374 10989.883 - 11040.295: 98.9320% ( 8) 00:07:34.374 11040.295 - 11090.708: 98.9616% ( 6) 00:07:34.374 11090.708 - 
11141.120: 98.9814% ( 4) 00:07:34.374 11141.120 - 11191.532: 99.0160% ( 7) 00:07:34.374 11191.532 - 11241.945: 99.0407% ( 5) 00:07:34.374 11241.945 - 11292.357: 99.0704% ( 6) 00:07:34.374 11292.357 - 11342.769: 99.1050% ( 7) 00:07:34.374 11342.769 - 11393.182: 99.1248% ( 4) 00:07:34.374 11393.182 - 11443.594: 99.1446% ( 4) 00:07:34.374 11443.594 - 11494.006: 99.1594% ( 3) 00:07:34.374 11494.006 - 11544.418: 99.1792% ( 4) 00:07:34.374 11544.418 - 11594.831: 99.1990% ( 4) 00:07:34.374 11594.831 - 11645.243: 99.2188% ( 4) 00:07:34.374 11645.243 - 11695.655: 99.2385% ( 4) 00:07:34.374 11695.655 - 11746.068: 99.2583% ( 4) 00:07:34.374 11746.068 - 11796.480: 99.2731% ( 3) 00:07:34.374 11796.480 - 11846.892: 99.2929% ( 4) 00:07:34.374 11846.892 - 11897.305: 99.3127% ( 4) 00:07:34.374 11897.305 - 11947.717: 99.3325% ( 4) 00:07:34.374 11947.717 - 11998.129: 99.3523% ( 4) 00:07:34.374 11998.129 - 12048.542: 99.3671% ( 3) 00:07:34.374 21778.117 - 21878.942: 99.3770% ( 2) 00:07:34.375 21878.942 - 21979.766: 99.3968% ( 4) 00:07:34.375 21979.766 - 22080.591: 99.4165% ( 4) 00:07:34.375 22080.591 - 22181.415: 99.4363% ( 4) 00:07:34.375 22181.415 - 22282.240: 99.4462% ( 2) 00:07:34.375 22282.240 - 22383.065: 99.4660% ( 4) 00:07:34.375 22383.065 - 22483.889: 99.4858% ( 4) 00:07:34.375 22483.889 - 22584.714: 99.5105% ( 5) 00:07:34.375 22584.714 - 22685.538: 99.5303% ( 4) 00:07:34.375 22685.538 - 22786.363: 99.5451% ( 3) 00:07:34.375 22786.363 - 22887.188: 99.5698% ( 5) 00:07:34.375 22887.188 - 22988.012: 99.5896% ( 4) 00:07:34.375 22988.012 - 23088.837: 99.6094% ( 4) 00:07:34.375 23088.837 - 23189.662: 99.6242% ( 3) 00:07:34.375 23189.662 - 23290.486: 99.6489% ( 5) 00:07:34.375 23290.486 - 23391.311: 99.6638% ( 3) 00:07:34.375 23391.311 - 23492.135: 99.6835% ( 4) 00:07:34.375 26416.049 - 26617.698: 99.6934% ( 2) 00:07:34.375 26617.698 - 26819.348: 99.7330% ( 8) 00:07:34.375 26819.348 - 27020.997: 99.7725% ( 8) 00:07:34.375 27020.997 - 27222.646: 99.8121% ( 8) 00:07:34.375 27222.646 - 27424.295: 99.8566% ( 9) 00:07:34.375 27424.295 - 27625.945: 99.8962% ( 8) 00:07:34.375 27625.945 - 27827.594: 99.9357% ( 8) 00:07:34.375 27827.594 - 28029.243: 99.9753% ( 8) 00:07:34.375 28029.243 - 28230.892: 100.0000% ( 5) 00:07:34.375 00:07:34.375 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:34.375 ============================================================================== 00:07:34.375 Range in us Cumulative IO count 00:07:34.375 5469.735 - 5494.942: 0.0099% ( 2) 00:07:34.375 5494.942 - 5520.148: 0.0544% ( 9) 00:07:34.375 5520.148 - 5545.354: 0.2373% ( 37) 00:07:34.375 5545.354 - 5570.560: 0.7812% ( 110) 00:07:34.375 5570.560 - 5595.766: 1.9235% ( 231) 00:07:34.375 5595.766 - 5620.972: 3.4662% ( 312) 00:07:34.375 5620.972 - 5646.178: 5.3649% ( 384) 00:07:34.375 5646.178 - 5671.385: 7.4367% ( 419) 00:07:34.375 5671.385 - 5696.591: 9.6123% ( 440) 00:07:34.375 5696.591 - 5721.797: 11.9709% ( 477) 00:07:34.375 5721.797 - 5747.003: 14.4630% ( 504) 00:07:34.375 5747.003 - 5772.209: 16.9057% ( 494) 00:07:34.375 5772.209 - 5797.415: 19.5263% ( 530) 00:07:34.375 5797.415 - 5822.622: 22.2162% ( 544) 00:07:34.375 5822.622 - 5847.828: 24.8368% ( 530) 00:07:34.375 5847.828 - 5873.034: 27.6355% ( 566) 00:07:34.375 5873.034 - 5898.240: 30.3946% ( 558) 00:07:34.375 5898.240 - 5923.446: 33.1883% ( 565) 00:07:34.375 5923.446 - 5948.652: 35.8979% ( 548) 00:07:34.375 5948.652 - 5973.858: 38.6620% ( 559) 00:07:34.375 5973.858 - 5999.065: 41.4903% ( 572) 00:07:34.375 5999.065 - 6024.271: 44.1752% ( 543) 00:07:34.375 
6024.271 - 6049.477: 46.9343% ( 558) 00:07:34.375 6049.477 - 6074.683: 49.7280% ( 565) 00:07:34.375 6074.683 - 6099.889: 52.5119% ( 563) 00:07:34.375 6099.889 - 6125.095: 55.2561% ( 555) 00:07:34.375 6125.095 - 6150.302: 57.9905% ( 553) 00:07:34.375 6150.302 - 6175.508: 60.7150% ( 551) 00:07:34.375 6175.508 - 6200.714: 63.4691% ( 557) 00:07:34.375 6200.714 - 6225.920: 66.2381% ( 560) 00:07:34.375 6225.920 - 6251.126: 68.9577% ( 550) 00:07:34.375 6251.126 - 6276.332: 71.6772% ( 550) 00:07:34.375 6276.332 - 6301.538: 74.4709% ( 565) 00:07:34.375 6301.538 - 6326.745: 77.2300% ( 558) 00:07:34.375 6326.745 - 6351.951: 79.9051% ( 541) 00:07:34.375 6351.951 - 6377.157: 82.3922% ( 503) 00:07:34.375 6377.157 - 6402.363: 84.5580% ( 438) 00:07:34.375 6402.363 - 6427.569: 86.2589% ( 344) 00:07:34.375 6427.569 - 6452.775: 87.4654% ( 244) 00:07:34.375 6452.775 - 6503.188: 89.1021% ( 331) 00:07:34.375 6503.188 - 6553.600: 90.0910% ( 200) 00:07:34.375 6553.600 - 6604.012: 90.7536% ( 134) 00:07:34.375 6604.012 - 6654.425: 91.2184% ( 94) 00:07:34.375 6654.425 - 6704.837: 91.5892% ( 75) 00:07:34.375 6704.837 - 6755.249: 91.8612% ( 55) 00:07:34.375 6755.249 - 6805.662: 92.0540% ( 39) 00:07:34.375 6805.662 - 6856.074: 92.2271% ( 35) 00:07:34.375 6856.074 - 6906.486: 92.3705% ( 29) 00:07:34.375 6906.486 - 6956.898: 92.5583% ( 38) 00:07:34.375 6956.898 - 7007.311: 92.7067% ( 30) 00:07:34.375 7007.311 - 7057.723: 92.8847% ( 36) 00:07:34.375 7057.723 - 7108.135: 93.0578% ( 35) 00:07:34.375 7108.135 - 7158.548: 93.2456% ( 38) 00:07:34.375 7158.548 - 7208.960: 93.4434% ( 40) 00:07:34.375 7208.960 - 7259.372: 93.6116% ( 34) 00:07:34.375 7259.372 - 7309.785: 93.7797% ( 34) 00:07:34.375 7309.785 - 7360.197: 93.9280% ( 30) 00:07:34.375 7360.197 - 7410.609: 94.0912% ( 33) 00:07:34.375 7410.609 - 7461.022: 94.2395% ( 30) 00:07:34.375 7461.022 - 7511.434: 94.3780% ( 28) 00:07:34.375 7511.434 - 7561.846: 94.5263% ( 30) 00:07:34.375 7561.846 - 7612.258: 94.6697% ( 29) 00:07:34.375 7612.258 - 7662.671: 94.8230% ( 31) 00:07:34.375 7662.671 - 7713.083: 94.9664% ( 29) 00:07:34.375 7713.083 - 7763.495: 95.1048% ( 28) 00:07:34.375 7763.495 - 7813.908: 95.2631% ( 32) 00:07:34.375 7813.908 - 7864.320: 95.3966% ( 27) 00:07:34.375 7864.320 - 7914.732: 95.5548% ( 32) 00:07:34.375 7914.732 - 7965.145: 95.7180% ( 33) 00:07:34.375 7965.145 - 8015.557: 95.8267% ( 22) 00:07:34.375 8015.557 - 8065.969: 95.9207% ( 19) 00:07:34.375 8065.969 - 8116.382: 95.9998% ( 16) 00:07:34.375 8116.382 - 8166.794: 96.0740% ( 15) 00:07:34.375 8166.794 - 8217.206: 96.1333% ( 12) 00:07:34.375 8217.206 - 8267.618: 96.1976% ( 13) 00:07:34.375 8267.618 - 8318.031: 96.2520% ( 11) 00:07:34.375 8318.031 - 8368.443: 96.3064% ( 11) 00:07:34.375 8368.443 - 8418.855: 96.3558% ( 10) 00:07:34.375 8418.855 - 8469.268: 96.4003% ( 9) 00:07:34.375 8469.268 - 8519.680: 96.4597% ( 12) 00:07:34.375 8519.680 - 8570.092: 96.5190% ( 12) 00:07:34.375 8570.092 - 8620.505: 96.5734% ( 11) 00:07:34.375 8620.505 - 8670.917: 96.6278% ( 11) 00:07:34.375 8670.917 - 8721.329: 96.6772% ( 10) 00:07:34.375 8721.329 - 8771.742: 96.7118% ( 7) 00:07:34.375 8771.742 - 8822.154: 96.7415% ( 6) 00:07:34.375 8822.154 - 8872.566: 96.7514% ( 2) 00:07:34.375 8872.566 - 8922.978: 96.7811% ( 6) 00:07:34.375 8922.978 - 8973.391: 96.8157% ( 7) 00:07:34.375 8973.391 - 9023.803: 96.8701% ( 11) 00:07:34.375 9023.803 - 9074.215: 96.9195% ( 10) 00:07:34.375 9074.215 - 9124.628: 96.9689% ( 10) 00:07:34.375 9124.628 - 9175.040: 97.0184% ( 10) 00:07:34.375 9175.040 - 9225.452: 97.0876% ( 14) 00:07:34.375 9225.452 - 
9275.865: 97.1717% ( 17) 00:07:34.375 9275.865 - 9326.277: 97.2607% ( 18) 00:07:34.375 9326.277 - 9376.689: 97.3398% ( 16) 00:07:34.375 9376.689 - 9427.102: 97.4140% ( 15) 00:07:34.375 9427.102 - 9477.514: 97.4881% ( 15) 00:07:34.375 9477.514 - 9527.926: 97.5623% ( 15) 00:07:34.375 9527.926 - 9578.338: 97.6414% ( 16) 00:07:34.375 9578.338 - 9628.751: 97.7205% ( 16) 00:07:34.375 9628.751 - 9679.163: 97.7898% ( 14) 00:07:34.375 9679.163 - 9729.575: 97.8590% ( 14) 00:07:34.375 9729.575 - 9779.988: 97.9183% ( 12) 00:07:34.375 9779.988 - 9830.400: 97.9628% ( 9) 00:07:34.375 9830.400 - 9880.812: 98.0222% ( 12) 00:07:34.375 9880.812 - 9931.225: 98.0765% ( 11) 00:07:34.375 9931.225 - 9981.637: 98.1260% ( 10) 00:07:34.375 9981.637 - 10032.049: 98.1655% ( 8) 00:07:34.375 10032.049 - 10082.462: 98.2002% ( 7) 00:07:34.375 10082.462 - 10132.874: 98.2150% ( 3) 00:07:34.375 10132.874 - 10183.286: 98.2496% ( 7) 00:07:34.375 10183.286 - 10233.698: 98.2644% ( 3) 00:07:34.375 10233.698 - 10284.111: 98.2743% ( 2) 00:07:34.375 10284.111 - 10334.523: 98.2892% ( 3) 00:07:34.375 10334.523 - 10384.935: 98.3089% ( 4) 00:07:34.375 10384.935 - 10435.348: 98.3337% ( 5) 00:07:34.375 10435.348 - 10485.760: 98.3633% ( 6) 00:07:34.375 10485.760 - 10536.172: 98.4029% ( 8) 00:07:34.375 10536.172 - 10586.585: 98.4227% ( 4) 00:07:34.375 10586.585 - 10636.997: 98.4523% ( 6) 00:07:34.375 10636.997 - 10687.409: 98.4919% ( 8) 00:07:34.375 10687.409 - 10737.822: 98.5413% ( 10) 00:07:34.375 10737.822 - 10788.234: 98.5957% ( 11) 00:07:34.375 10788.234 - 10838.646: 98.6452% ( 10) 00:07:34.375 10838.646 - 10889.058: 98.6946% ( 10) 00:07:34.375 10889.058 - 10939.471: 98.7441% ( 10) 00:07:34.375 10939.471 - 10989.883: 98.7985% ( 11) 00:07:34.375 10989.883 - 11040.295: 98.8528% ( 11) 00:07:34.375 11040.295 - 11090.708: 98.9122% ( 12) 00:07:34.375 11090.708 - 11141.120: 98.9715% ( 12) 00:07:34.375 11141.120 - 11191.532: 99.0309% ( 12) 00:07:34.375 11191.532 - 11241.945: 99.0803% ( 10) 00:07:34.375 11241.945 - 11292.357: 99.1199% ( 8) 00:07:34.375 11292.357 - 11342.769: 99.1594% ( 8) 00:07:34.375 11342.769 - 11393.182: 99.1990% ( 8) 00:07:34.375 11393.182 - 11443.594: 99.2336% ( 7) 00:07:34.375 11443.594 - 11494.006: 99.2633% ( 6) 00:07:34.375 11494.006 - 11544.418: 99.2929% ( 6) 00:07:34.375 11544.418 - 11594.831: 99.3176% ( 5) 00:07:34.375 11594.831 - 11645.243: 99.3424% ( 5) 00:07:34.375 11645.243 - 11695.655: 99.3523% ( 2) 00:07:34.375 11695.655 - 11746.068: 99.3621% ( 2) 00:07:34.375 11746.068 - 11796.480: 99.3671% ( 1) 00:07:34.375 20366.572 - 20467.397: 99.3869% ( 4) 00:07:34.375 20467.397 - 20568.222: 99.4066% ( 4) 00:07:34.375 20568.222 - 20669.046: 99.4215% ( 3) 00:07:34.375 20669.046 - 20769.871: 99.4413% ( 4) 00:07:34.375 20769.871 - 20870.695: 99.4610% ( 4) 00:07:34.375 20870.695 - 20971.520: 99.4759% ( 3) 00:07:34.375 20971.520 - 21072.345: 99.4956% ( 4) 00:07:34.375 21072.345 - 21173.169: 99.5105% ( 3) 00:07:34.375 21173.169 - 21273.994: 99.5352% ( 5) 00:07:34.375 21273.994 - 21374.818: 99.5550% ( 4) 00:07:34.375 21374.818 - 21475.643: 99.5748% ( 4) 00:07:34.375 21475.643 - 21576.468: 99.5896% ( 3) 00:07:34.375 21576.468 - 21677.292: 99.6094% ( 4) 00:07:34.375 21677.292 - 21778.117: 99.6292% ( 4) 00:07:34.376 21778.117 - 21878.942: 99.6489% ( 4) 00:07:34.376 21878.942 - 21979.766: 99.6687% ( 4) 00:07:34.376 21979.766 - 22080.591: 99.6835% ( 3) 00:07:34.376 25105.329 - 25206.154: 99.6934% ( 2) 00:07:34.376 25206.154 - 25306.978: 99.7083% ( 3) 00:07:34.376 25306.978 - 25407.803: 99.7280% ( 4) 00:07:34.376 25407.803 - 
25508.628: 99.7478% ( 4) 00:07:34.376 25508.628 - 25609.452: 99.7627% ( 3) 00:07:34.376 25609.452 - 25710.277: 99.7824% ( 4) 00:07:34.376 25710.277 - 25811.102: 99.8072% ( 5) 00:07:34.376 25811.102 - 26012.751: 99.8418% ( 7) 00:07:34.376 26012.751 - 26214.400: 99.8813% ( 8) 00:07:34.376 26214.400 - 26416.049: 99.9209% ( 8) 00:07:34.376 26416.049 - 26617.698: 99.9604% ( 8) 00:07:34.376 26617.698 - 26819.348: 99.9951% ( 7) 00:07:34.376 26819.348 - 27020.997: 100.0000% ( 1) 00:07:34.376 00:07:34.376 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:34.376 ============================================================================== 00:07:34.376 Range in us Cumulative IO count 00:07:34.376 5469.735 - 5494.942: 0.0049% ( 1) 00:07:34.376 5494.942 - 5520.148: 0.0692% ( 13) 00:07:34.376 5520.148 - 5545.354: 0.2324% ( 33) 00:07:34.376 5545.354 - 5570.560: 0.6725% ( 89) 00:07:34.376 5570.560 - 5595.766: 1.7553% ( 219) 00:07:34.376 5595.766 - 5620.972: 3.5651% ( 366) 00:07:34.376 5620.972 - 5646.178: 5.4589% ( 383) 00:07:34.376 5646.178 - 5671.385: 7.5603% ( 425) 00:07:34.376 5671.385 - 5696.591: 9.6915% ( 431) 00:07:34.376 5696.591 - 5721.797: 11.9561% ( 458) 00:07:34.376 5721.797 - 5747.003: 14.4630% ( 507) 00:07:34.376 5747.003 - 5772.209: 17.0392% ( 521) 00:07:34.376 5772.209 - 5797.415: 19.6104% ( 520) 00:07:34.376 5797.415 - 5822.622: 22.4337% ( 571) 00:07:34.376 5822.622 - 5847.828: 25.0099% ( 521) 00:07:34.376 5847.828 - 5873.034: 27.6998% ( 544) 00:07:34.376 5873.034 - 5898.240: 30.3995% ( 546) 00:07:34.376 5898.240 - 5923.446: 33.2130% ( 569) 00:07:34.376 5923.446 - 5948.652: 36.0611% ( 576) 00:07:34.376 5948.652 - 5973.858: 38.8894% ( 572) 00:07:34.376 5973.858 - 5999.065: 41.5991% ( 548) 00:07:34.376 5999.065 - 6024.271: 44.3384% ( 554) 00:07:34.376 6024.271 - 6049.477: 47.0530% ( 549) 00:07:34.376 6049.477 - 6074.683: 49.7725% ( 550) 00:07:34.376 6074.683 - 6099.889: 52.4773% ( 547) 00:07:34.376 6099.889 - 6125.095: 55.1572% ( 542) 00:07:34.376 6125.095 - 6150.302: 57.8867% ( 552) 00:07:34.376 6150.302 - 6175.508: 60.6606% ( 561) 00:07:34.376 6175.508 - 6200.714: 63.4494% ( 564) 00:07:34.376 6200.714 - 6225.920: 66.1392% ( 544) 00:07:34.376 6225.920 - 6251.126: 68.8736% ( 553) 00:07:34.376 6251.126 - 6276.332: 71.6327% ( 558) 00:07:34.376 6276.332 - 6301.538: 74.3968% ( 559) 00:07:34.376 6301.538 - 6326.745: 77.1410% ( 555) 00:07:34.376 6326.745 - 6351.951: 79.8111% ( 540) 00:07:34.376 6351.951 - 6377.157: 82.3428% ( 512) 00:07:34.376 6377.157 - 6402.363: 84.5431% ( 445) 00:07:34.376 6402.363 - 6427.569: 86.3083% ( 357) 00:07:34.376 6427.569 - 6452.775: 87.5247% ( 246) 00:07:34.376 6452.775 - 6503.188: 89.1663% ( 332) 00:07:34.376 6503.188 - 6553.600: 90.2047% ( 210) 00:07:34.376 6553.600 - 6604.012: 90.8871% ( 138) 00:07:34.376 6604.012 - 6654.425: 91.4013% ( 104) 00:07:34.376 6654.425 - 6704.837: 91.7573% ( 72) 00:07:34.376 6704.837 - 6755.249: 91.9897% ( 47) 00:07:34.376 6755.249 - 6805.662: 92.1529% ( 33) 00:07:34.376 6805.662 - 6856.074: 92.2666% ( 23) 00:07:34.376 6856.074 - 6906.486: 92.4001% ( 27) 00:07:34.376 6906.486 - 6956.898: 92.5485% ( 30) 00:07:34.376 6956.898 - 7007.311: 92.6919% ( 29) 00:07:34.376 7007.311 - 7057.723: 92.8600% ( 34) 00:07:34.376 7057.723 - 7108.135: 93.0479% ( 38) 00:07:34.376 7108.135 - 7158.548: 93.2358% ( 38) 00:07:34.376 7158.548 - 7208.960: 93.4138% ( 36) 00:07:34.376 7208.960 - 7259.372: 93.5769% ( 33) 00:07:34.376 7259.372 - 7309.785: 93.7203% ( 29) 00:07:34.376 7309.785 - 7360.197: 93.9082% ( 38) 00:07:34.376 7360.197 - 
00:07:34.376 [remaining rows of the preceding latency histogram elided: buckets 7410.609us - 25105.329us, cumulative IO count rising to 100.0000%]
00:07:34.377
00:07:34.377 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:34.377 ==============================================================================
00:07:34.377        Range in us     Cumulative    IO count
00:07:34.378 [per-bucket rows elided: buckets 5469.735us - 23492.135us, cumulative IO count rising to 100.0000%]
00:07:34.378
00:07:34.378 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:34.378 ==============================================================================
00:07:34.378        Range in us     Cumulative    IO count
00:07:34.379 [per-bucket rows elided: buckets 5494.942us - 21778.117us, cumulative IO count rising to 100.0000%]
00:07:34.379
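The pass that follows is the detailed-latency write run. For reproducing it by hand, a minimal sketch of the same invocation is below; the binary path is the one this job uses, and the flag readings are taken from spdk_nvme_perf's usage text (verify against the usage output of your own build):

    # Sketch of the invocation below (flag meanings per spdk_nvme_perf usage text):
    #   -q 128    queue depth per namespace
    #   -w write  sequential-write workload
    #   -o 12288  I/O size in bytes (12 KiB)
    #   -t 1      run time in seconds
    #   -LL       software latency tracking; giving -L twice also prints the
    #             per-bucket histograms that follow the summaries
    #   -i 0      shared memory group ID
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w write -o 12288 -t 1 -LL -i 0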
00:07:34.379 12:19:41 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:35.755 Initializing NVMe Controllers
00:07:35.755 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:35.755 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:35.755 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:35.755 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:35.755 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:35.755 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:35.755 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:35.755 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:35.755 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:35.755 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:35.755 Initialization complete. Launching workers.
00:07:35.755 ========================================================
00:07:35.755                                                                            Latency(us)
00:07:35.755 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:35.755 PCIE (0000:00:10.0) NSID 1 from core 0:   16684.01     195.52    7680.90    5769.56   34700.76
00:07:35.755 PCIE (0000:00:11.0) NSID 1 from core 0:   16684.01     195.52    7668.94    5950.34   33013.90
00:07:35.755 PCIE (0000:00:13.0) NSID 1 from core 0:   16684.01     195.52    7657.01    5917.55   31911.54
00:07:35.755 PCIE (0000:00:12.0) NSID 1 from core 0:   16684.01     195.52    7644.90    5986.21   30324.95
00:07:35.755 PCIE (0000:00:12.0) NSID 2 from core 0:   16684.01     195.52    7632.75    5946.80   28713.73
00:07:35.755 PCIE (0000:00:12.0) NSID 3 from core 0:   16747.94     196.26    7591.47    5999.09   22595.75
00:07:35.755 ========================================================
00:07:35.755 Total                                  :  100168.00    1173.84    7645.96    5769.56   34700.76
00:07:35.755
00:07:35.755 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:35.755 =================================================================================
00:07:35.755   1.00000% :  6150.302us
00:07:35.755  10.00000% :  6351.951us
00:07:35.755  25.00000% :  6654.425us
00:07:35.755  50.00000% :  7561.846us
00:07:35.755  75.00000% :  8217.206us
00:07:35.755  90.00000% :  8822.154us
00:07:35.755  95.00000% :  9326.277us
00:07:35.755  98.00000% : 10082.462us
00:07:35.755  99.00000% : 11040.295us
00:07:35.755  99.50000% : 28432.542us
00:07:35.755  99.90000% : 34280.369us
00:07:35.755  99.99000% : 34683.668us
00:07:35.755  99.99900% : 34885.317us
00:07:35.755  99.99990% : 34885.317us
00:07:35.755  99.99999% : 34885.317us
00:07:35.755
00:07:35.755 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:35.755 =================================================================================
00:07:35.755   1.00000% :  6251.126us
00:07:35.755  10.00000% :  6452.775us
00:07:35.755  25.00000% :  6604.012us
00:07:35.755  50.00000% :  7713.083us
00:07:35.755  75.00000% :  8217.206us
00:07:35.755  90.00000% :  8670.917us
00:07:35.755  95.00000% :  9275.865us
00:07:35.755  98.00000% : 10334.523us
00:07:35.755  99.00000% : 10889.058us
00:07:35.755  99.50000% : 26617.698us
00:07:35.755  99.90000% : 32667.175us
00:07:35.755  99.99000% : 33070.474us
00:07:35.755  99.99900% : 33070.474us
00:07:35.755  99.99990% : 33070.474us
00:07:35.755  99.99999% : 33070.474us
00:07:35.755
00:07:35.755 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:35.755 =================================================================================
00:07:35.755   1.00000% :  6225.920us
00:07:35.755  10.00000% :  6452.775us
00:07:35.755  25.00000% :  6604.012us
00:07:35.755  50.00000% :  7662.671us
00:07:35.755  75.00000% :  8217.206us
00:07:35.755  90.00000% :  8670.917us
00:07:35.755  95.00000% :  9225.452us
00:07:35.755  98.00000% : 10132.874us
00:07:35.755  99.00000% : 11241.945us
00:07:35.755  99.50000% : 25811.102us
00:07:35.755  99.90000% : 31658.929us
00:07:35.755  99.99000% : 32062.228us
00:07:35.755  99.99900% : 32062.228us
00:07:35.755  99.99990% : 32062.228us
00:07:35.755  99.99999% : 32062.228us
00:07:35.755
00:07:35.755 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:35.755 =================================================================================
00:07:35.755   1.00000% :  6251.126us
00:07:35.755  10.00000% :  6452.775us
00:07:35.755  25.00000% :  6604.012us
00:07:35.755  50.00000% :  7662.671us
00:07:35.755  75.00000% :  8217.206us
00:07:35.755  90.00000% :  8670.917us
00:07:35.755  95.00000% :  9225.452us
00:07:35.755  98.00000% :  9981.637us
00:07:35.755  99.00000% : 11746.068us
00:07:35.755  99.50000% : 24298.732us
00:07:35.755  99.90000% : 30045.735us
00:07:35.755  99.99000% : 30449.034us
00:07:35.755  99.99900% : 30449.034us
00:07:35.755  99.99990% : 30449.034us
00:07:35.755  99.99999% : 30449.034us
00:07:35.755
00:07:35.755 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:35.755 =================================================================================
00:07:35.755   1.00000% :  6251.126us
00:07:35.755  10.00000% :  6452.775us
00:07:35.755  25.00000% :  6604.012us
00:07:35.755  50.00000% :  7662.671us
00:07:35.755  75.00000% :  8166.794us
00:07:35.755  90.00000% :  8620.505us
00:07:35.755  95.00000% :  9326.277us
00:07:35.755  98.00000% :  9880.812us
00:07:35.755  99.00000% : 11796.480us
00:07:35.755  99.50000% : 22685.538us
00:07:35.755  99.90000% : 28432.542us
00:07:35.755  99.99000% : 28835.840us
00:07:35.755  99.99900% : 28835.840us
00:07:35.755  99.99990% : 28835.840us
00:07:35.755  99.99999% : 28835.840us
00:07:35.755
00:07:35.755 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:35.755 =================================================================================
00:07:35.755   1.00000% :  6251.126us
00:07:35.755  10.00000% :  6452.775us
00:07:35.755  25.00000% :  6604.012us
00:07:35.755  50.00000% :  7662.671us
00:07:35.755  75.00000% :  8217.206us
00:07:35.755  90.00000% :  8670.917us
00:07:35.755  95.00000% :  9326.277us
00:07:35.755  98.00000% :  9981.637us
00:07:35.755  99.00000% : 11443.594us
00:07:35.755  99.50000% : 16636.062us
00:07:35.755  99.90000% : 22181.415us
00:07:35.755  99.99000% : 22584.714us
00:07:35.755  99.99900% : 22685.538us
00:07:35.755  99.99990% : 22685.538us
00:07:35.755  99.99999% : 22685.538us
00:07:35.755
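To compare the six namespaces at a glance, the rows of the "Device Information" table above can be filtered out of a saved copy of this console output. A sketch ("build.log" is a hypothetical filename; it assumes every line keeps the HH:MM:SS.mmm prefix used in this log, so device rows start at field 2):

    # Device rows are the only lines whose second field is the literal word
    # "PCIE" and that carry five numeric columns (NF == 13 with the prefix).
    awk '$2 == "PCIE" && NF == 13 {
        printf "%s NSID %s  IOPS=%s  avg=%sus  max=%sus\n", $3, $5, $9, $11, $13
    }' build.log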
00:07:35.755 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:35.755 ==============================================================================
00:07:35.755        Range in us     Cumulative    IO count
00:07:35.756 [per-bucket rows elided: buckets 5747.003us - 34885.317us, cumulative IO count rising to 100.0000%]
00:07:35.756
00:07:35.756 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:35.756 ==============================================================================
00:07:35.756        Range in us     Cumulative    IO count
00:07:35.757 [per-bucket rows elided: buckets 5948.652us - 33070.474us, cumulative IO count rising to 100.0000%]
00:07:35.757
00:07:35.757 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:35.757 ==============================================================================
00:07:35.757        Range in us     Cumulative    IO count
00:07:35.757 [per-bucket rows elided: buckets 5898.240us - 32062.228us, cumulative IO count rising to 100.0000%]
00:07:35.757
00:07:35.757 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:35.757 ==============================================================================
00:07:35.757        Range in us     Cumulative    IO count
00:07:35.758 [per-bucket rows elided: buckets 5973.858us - 30449.034us, cumulative IO count rising to 100.0000%]
00:07:35.758
00:07:35.758 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:35.758 ==============================================================================
00:07:35.758        Range in us     Cumulative    IO count
00:07:35.759 [per-bucket rows elided: buckets 5923.446us - 28835.840us, cumulative IO count rising to 100.0000%]
00:07:35.759
00:07:35.759 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:35.759 ==============================================================================
00:07:35.759        Range in us     Cumulative    IO count
00:07:35.759 [per-bucket rows elided: buckets from 5999.065us; captured output ends mid-histogram at 7511.434us]
48.0439% ( 82) 00:07:35.759 7511.434 - 7561.846: 48.9742% ( 156) 00:07:35.759 7561.846 - 7612.258: 49.7913% ( 137) 00:07:35.759 7612.258 - 7662.671: 50.7276% ( 157) 00:07:35.759 7662.671 - 7713.083: 52.0694% ( 225) 00:07:35.759 7713.083 - 7763.495: 53.7154% ( 276) 00:07:35.759 7763.495 - 7813.908: 55.3972% ( 282) 00:07:35.759 7813.908 - 7864.320: 57.3891% ( 334) 00:07:35.759 7864.320 - 7914.732: 59.6732% ( 383) 00:07:35.759 7914.732 - 7965.145: 62.1780% ( 420) 00:07:35.759 7965.145 - 8015.557: 65.0942% ( 489) 00:07:35.760 8015.557 - 8065.969: 68.3624% ( 548) 00:07:35.760 8065.969 - 8116.382: 71.4575% ( 519) 00:07:35.760 8116.382 - 8166.794: 74.7555% ( 553) 00:07:35.760 8166.794 - 8217.206: 77.4690% ( 455) 00:07:35.760 8217.206 - 8267.618: 80.0155% ( 427) 00:07:35.760 8267.618 - 8318.031: 82.2340% ( 372) 00:07:35.760 8318.031 - 8368.443: 83.7965% ( 262) 00:07:35.760 8368.443 - 8418.855: 85.2040% ( 236) 00:07:35.760 8418.855 - 8469.268: 86.5100% ( 219) 00:07:35.760 8469.268 - 8519.680: 87.8638% ( 227) 00:07:35.760 8519.680 - 8570.092: 89.0088% ( 192) 00:07:35.760 8570.092 - 8620.505: 89.7483% ( 124) 00:07:35.760 8620.505 - 8670.917: 90.3030% ( 93) 00:07:35.760 8670.917 - 8721.329: 90.6548% ( 59) 00:07:35.760 8721.329 - 8771.742: 90.9530% ( 50) 00:07:35.760 8771.742 - 8822.154: 91.3228% ( 62) 00:07:35.760 8822.154 - 8872.566: 91.7223% ( 67) 00:07:35.760 8872.566 - 8922.978: 92.1040% ( 64) 00:07:35.760 8922.978 - 8973.391: 92.5811% ( 80) 00:07:35.760 8973.391 - 9023.803: 92.9747% ( 66) 00:07:35.760 9023.803 - 9074.215: 93.4816% ( 85) 00:07:35.760 9074.215 - 9124.628: 94.0184% ( 90) 00:07:35.760 9124.628 - 9175.040: 94.3881% ( 62) 00:07:35.760 9175.040 - 9225.452: 94.6923% ( 51) 00:07:35.760 9225.452 - 9275.865: 94.9666% ( 46) 00:07:35.760 9275.865 - 9326.277: 95.3900% ( 71) 00:07:35.760 9326.277 - 9376.689: 95.6286% ( 40) 00:07:35.760 9376.689 - 9427.102: 95.8135% ( 31) 00:07:35.760 9427.102 - 9477.514: 96.2071% ( 66) 00:07:35.760 9477.514 - 9527.926: 96.4039% ( 33) 00:07:35.760 9527.926 - 9578.338: 96.6066% ( 34) 00:07:35.760 9578.338 - 9628.751: 96.7498% ( 24) 00:07:35.760 9628.751 - 9679.163: 96.9048% ( 26) 00:07:35.760 9679.163 - 9729.575: 97.0718% ( 28) 00:07:35.760 9729.575 - 9779.988: 97.3342% ( 44) 00:07:35.760 9779.988 - 9830.400: 97.4952% ( 27) 00:07:35.760 9830.400 - 9880.812: 97.6264% ( 22) 00:07:35.760 9880.812 - 9931.225: 97.9365% ( 52) 00:07:35.760 9931.225 - 9981.637: 98.0856% ( 25) 00:07:35.760 9981.637 - 10032.049: 98.2347% ( 25) 00:07:35.760 10032.049 - 10082.462: 98.3659% ( 22) 00:07:35.760 10082.462 - 10132.874: 98.4494% ( 14) 00:07:35.760 10132.874 - 10183.286: 98.5150% ( 11) 00:07:35.760 10183.286 - 10233.698: 98.5866% ( 12) 00:07:35.760 10233.698 - 10284.111: 98.6641% ( 13) 00:07:35.760 10284.111 - 10334.523: 98.7178% ( 9) 00:07:35.760 10334.523 - 10384.935: 98.7417% ( 4) 00:07:35.760 10384.935 - 10435.348: 98.7715% ( 5) 00:07:35.760 10435.348 - 10485.760: 98.7953% ( 4) 00:07:35.760 10485.760 - 10536.172: 98.8192% ( 4) 00:07:35.760 10536.172 - 10586.585: 98.8371% ( 3) 00:07:35.760 10586.585 - 10636.997: 98.8550% ( 3) 00:07:35.760 11040.295 - 11090.708: 98.8669% ( 2) 00:07:35.760 11090.708 - 11141.120: 98.8907% ( 4) 00:07:35.760 11141.120 - 11191.532: 98.9086% ( 3) 00:07:35.760 11191.532 - 11241.945: 98.9325% ( 4) 00:07:35.760 11241.945 - 11292.357: 98.9504% ( 3) 00:07:35.760 11292.357 - 11342.769: 98.9742% ( 4) 00:07:35.760 11342.769 - 11393.182: 98.9981% ( 4) 00:07:35.760 11393.182 - 11443.594: 99.0160% ( 3) 00:07:35.760 11443.594 - 11494.006: 99.0398% ( 4) 
00:07:35.760 11494.006 - 11544.418: 99.0577% ( 3) 00:07:35.760 11544.418 - 11594.831: 99.0816% ( 4) 00:07:35.760 11594.831 - 11645.243: 99.0995% ( 3) 00:07:35.760 11645.243 - 11695.655: 99.1233% ( 4) 00:07:35.760 11695.655 - 11746.068: 99.1472% ( 4) 00:07:35.760 11746.068 - 11796.480: 99.1651% ( 3) 00:07:35.760 11796.480 - 11846.892: 99.1889% ( 4) 00:07:35.760 11846.892 - 11897.305: 99.2128% ( 4) 00:07:35.760 11897.305 - 11947.717: 99.2307% ( 3) 00:07:35.760 11947.717 - 11998.129: 99.2366% ( 1) 00:07:35.760 15426.166 - 15526.991: 99.2486% ( 2) 00:07:35.760 15526.991 - 15627.815: 99.2724% ( 4) 00:07:35.760 15627.815 - 15728.640: 99.2963% ( 4) 00:07:35.760 15728.640 - 15829.465: 99.3201% ( 4) 00:07:35.760 15829.465 - 15930.289: 99.3440% ( 4) 00:07:35.760 15930.289 - 16031.114: 99.3738% ( 5) 00:07:35.760 16031.114 - 16131.938: 99.3977% ( 4) 00:07:35.760 16131.938 - 16232.763: 99.4156% ( 3) 00:07:35.760 16232.763 - 16333.588: 99.4454% ( 5) 00:07:35.760 16333.588 - 16434.412: 99.4692% ( 4) 00:07:35.760 16434.412 - 16535.237: 99.4931% ( 4) 00:07:35.760 16535.237 - 16636.062: 99.5169% ( 4) 00:07:35.760 16636.062 - 16736.886: 99.5408% ( 4) 00:07:35.760 16736.886 - 16837.711: 99.5646% ( 4) 00:07:35.760 16837.711 - 16938.535: 99.5945% ( 5) 00:07:35.760 16938.535 - 17039.360: 99.6183% ( 4) 00:07:35.760 20870.695 - 20971.520: 99.6362% ( 3) 00:07:35.760 20971.520 - 21072.345: 99.6541% ( 3) 00:07:35.760 21072.345 - 21173.169: 99.6780% ( 4) 00:07:35.760 21173.169 - 21273.994: 99.7018% ( 4) 00:07:35.760 21273.994 - 21374.818: 99.7257% ( 4) 00:07:35.760 21374.818 - 21475.643: 99.7495% ( 4) 00:07:35.760 21475.643 - 21576.468: 99.7674% ( 3) 00:07:35.760 21576.468 - 21677.292: 99.7913% ( 4) 00:07:35.760 21677.292 - 21778.117: 99.8092% ( 3) 00:07:35.760 21778.117 - 21878.942: 99.8330% ( 4) 00:07:35.760 21878.942 - 21979.766: 99.8569% ( 4) 00:07:35.760 21979.766 - 22080.591: 99.8807% ( 4) 00:07:35.760 22080.591 - 22181.415: 99.9046% ( 4) 00:07:35.760 22181.415 - 22282.240: 99.9284% ( 4) 00:07:35.760 22282.240 - 22383.065: 99.9463% ( 3) 00:07:35.760 22383.065 - 22483.889: 99.9702% ( 4) 00:07:35.760 22483.889 - 22584.714: 99.9940% ( 4) 00:07:35.760 22584.714 - 22685.538: 100.0000% ( 1) 00:07:35.760 00:07:35.760 12:19:42 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:35.760 00:07:35.760 real 0m2.500s 00:07:35.760 user 0m2.208s 00:07:35.760 sys 0m0.194s 00:07:35.760 12:19:42 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.760 12:19:42 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.760 ************************************ 00:07:35.760 END TEST nvme_perf 00:07:35.760 ************************************ 00:07:35.760 12:19:42 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:35.760 12:19:42 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:35.760 12:19:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.760 12:19:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.760 ************************************ 00:07:35.760 START TEST nvme_hello_world 00:07:35.760 ************************************ 00:07:35.760 12:19:42 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:35.760 Initializing NVMe Controllers 00:07:35.760 Attached to 0000:00:10.0 00:07:35.760 Namespace ID: 1 size: 6GB 00:07:35.760 Attached to 0000:00:11.0 00:07:35.760 Namespace ID: 1 size: 5GB 
00:07:35.760 Attached to 0000:00:13.0 00:07:35.760 Namespace ID: 1 size: 1GB 00:07:35.760 Attached to 0000:00:12.0 00:07:35.760 Namespace ID: 1 size: 4GB 00:07:35.760 Namespace ID: 2 size: 4GB 00:07:35.760 Namespace ID: 3 size: 4GB 00:07:35.760 Initialization complete. 00:07:35.760 INFO: using host memory buffer for IO 00:07:35.760 Hello world! 00:07:35.760 INFO: using host memory buffer for IO 00:07:35.760 Hello world! 00:07:35.760 INFO: using host memory buffer for IO 00:07:35.760 Hello world! 00:07:35.760 INFO: using host memory buffer for IO 00:07:35.760 Hello world! 00:07:35.760 INFO: using host memory buffer for IO 00:07:35.760 Hello world! 00:07:35.760 INFO: using host memory buffer for IO 00:07:35.760 Hello world! 00:07:35.760 00:07:35.760 real 0m0.196s 00:07:35.760 user 0m0.080s 00:07:35.760 sys 0m0.086s 00:07:35.760 12:19:42 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.760 12:19:42 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:35.760 ************************************ 00:07:35.760 END TEST nvme_hello_world 00:07:35.760 ************************************ 00:07:35.760 12:19:42 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:35.760 12:19:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.760 12:19:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.760 12:19:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.760 ************************************ 00:07:35.760 START TEST nvme_sgl 00:07:35.760 ************************************ 00:07:35.760 12:19:42 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:36.018 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:36.018 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:36.018 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:36.018 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:36.018 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:36.018 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:36.018 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:36.018 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:36.018 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:36.018 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:36.018 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:36.018 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:36.018 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:36.018 
0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:36.018 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:36.018 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:36.018 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:36.018 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:36.018 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:36.018 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:36.018 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:36.018 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:36.018 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:36.018 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:36.018 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:36.018 NVMe Readv/Writev Request test 00:07:36.018 Attached to 0000:00:10.0 00:07:36.018 Attached to 0000:00:11.0 00:07:36.018 Attached to 0000:00:13.0 00:07:36.018 Attached to 0000:00:12.0 00:07:36.018 0000:00:10.0: build_io_request_2 test passed 00:07:36.018 0000:00:10.0: build_io_request_4 test passed 00:07:36.018 0000:00:10.0: build_io_request_5 test passed 00:07:36.018 0000:00:10.0: build_io_request_6 test passed 00:07:36.018 0000:00:10.0: build_io_request_7 test passed 00:07:36.018 0000:00:10.0: build_io_request_10 test passed 00:07:36.018 0000:00:11.0: build_io_request_2 test passed 00:07:36.018 0000:00:11.0: build_io_request_4 test passed 00:07:36.018 0000:00:11.0: build_io_request_5 test passed 00:07:36.018 0000:00:11.0: build_io_request_6 test passed 00:07:36.018 0000:00:11.0: build_io_request_7 test passed 00:07:36.018 0000:00:11.0: build_io_request_10 test passed 00:07:36.018 Cleaning up... 00:07:36.018 00:07:36.018 real 0m0.274s 00:07:36.018 user 0m0.140s 00:07:36.018 sys 0m0.089s 00:07:36.018 12:19:43 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.018 ************************************ 00:07:36.018 END TEST nvme_sgl 00:07:36.018 ************************************ 00:07:36.018 12:19:43 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:36.018 12:19:43 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:36.018 12:19:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.018 12:19:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.018 12:19:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.018 ************************************ 00:07:36.018 START TEST nvme_e2edp 00:07:36.018 ************************************ 00:07:36.018 12:19:43 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:36.276 NVMe Write/Read with End-to-End data protection test 00:07:36.276 Attached to 0000:00:10.0 00:07:36.276 Attached to 0000:00:11.0 00:07:36.276 Attached to 0000:00:13.0 00:07:36.276 Attached to 0000:00:12.0 00:07:36.276 Cleaning up... 
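Editor's note: the nvme_e2edp test above runs /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp, which issues writes and reads under NVMe end-to-end data protection settings. As a rough illustration only, not the test's actual code: one PI-protected write through SPDK's public API could look like the sketch below, where ns, qpair, buf and lba are assumed to be set up by the caller, and the PRACT flag asks the controller to generate and check the protection information itself.

```c
#include <stdbool.h>
#include "spdk/nvme.h"

/* Completion callback: record that the I/O finished. */
static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cpl;
	*(bool *)arg = true;
}

/* One write with PRACT set: the controller generates and inserts the
 * protection information, so the host buffer carries only data. */
static int
write_with_pi(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	      void *buf, uint64_t lba, bool *done)
{
	return spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, 1 /* blocks */,
				      io_complete, done,
				      SPDK_NVME_IO_FLAGS_PRACT);
}
```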
00:07:36.276
00:07:36.276 real 0m0.210s
00:07:36.276 user 0m0.058s
00:07:36.276 sys 0m0.107s
00:07:36.276 12:19:43 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.276 12:19:43 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:36.276 ************************************
00:07:36.276 END TEST nvme_e2edp
00:07:36.276 ************************************
00:07:36.276 12:19:43 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:36.276 12:19:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:36.276 12:19:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.276 12:19:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.276 ************************************
00:07:36.276 START TEST nvme_reserve
00:07:36.276 ************************************
00:07:36.276 12:19:43 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:36.535 =====================================================
00:07:36.535 NVMe Controller at PCI bus 0, device 16, function 0
00:07:36.535 =====================================================
00:07:36.535 Reservations: Not Supported
00:07:36.535 =====================================================
00:07:36.535 NVMe Controller at PCI bus 0, device 17, function 0
00:07:36.535 =====================================================
00:07:36.535 Reservations: Not Supported
00:07:36.535 =====================================================
00:07:36.535 NVMe Controller at PCI bus 0, device 19, function 0
00:07:36.535 =====================================================
00:07:36.535 Reservations: Not Supported
00:07:36.535 =====================================================
00:07:36.535 NVMe Controller at PCI bus 0, device 18, function 0
00:07:36.535 =====================================================
00:07:36.535 Reservations: Not Supported
00:07:36.535 Reservation test passed
00:07:36.535
00:07:36.535 real 0m0.200s
00:07:36.535 user 0m0.072s
00:07:36.535 sys 0m0.083s
00:07:36.535 12:19:43 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.535 ************************************
00:07:36.535 END TEST nvme_reserve
00:07:36.535 12:19:43 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:36.535 ************************************
00:07:36.535 12:19:43 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:36.535 12:19:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:36.535 12:19:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.535 12:19:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.535 ************************************
00:07:36.535 START TEST nvme_err_injection
00:07:36.535 ************************************
00:07:36.535 12:19:43 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:36.810 NVMe Error Injection test
00:07:36.810 Attached to 0000:00:10.0
00:07:36.810 Attached to 0000:00:11.0
00:07:36.810 Attached to 0000:00:13.0
00:07:36.810 Attached to 0000:00:12.0
00:07:36.810 0000:00:10.0: get features failed as expected
00:07:36.810 0000:00:11.0: get features failed as expected
00:07:36.810 0000:00:13.0: get features failed as expected
00:07:36.810 0000:00:12.0: get features failed as expected
00:07:36.810
0000:00:10.0: get features successfully as expected 00:07:36.810 0000:00:11.0: get features successfully as expected 00:07:36.810 0000:00:13.0: get features successfully as expected 00:07:36.810 0000:00:12.0: get features successfully as expected 00:07:36.810 0000:00:10.0: read failed as expected 00:07:36.810 0000:00:11.0: read failed as expected 00:07:36.810 0000:00:13.0: read failed as expected 00:07:36.810 0000:00:12.0: read failed as expected 00:07:36.810 0000:00:10.0: read successfully as expected 00:07:36.810 0000:00:11.0: read successfully as expected 00:07:36.810 0000:00:13.0: read successfully as expected 00:07:36.810 0000:00:12.0: read successfully as expected 00:07:36.810 Cleaning up... 00:07:36.810 00:07:36.810 real 0m0.210s 00:07:36.810 user 0m0.089s 00:07:36.810 sys 0m0.090s 00:07:36.810 12:19:43 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.810 12:19:43 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:36.810 ************************************ 00:07:36.810 END TEST nvme_err_injection 00:07:36.810 ************************************ 00:07:36.810 12:19:43 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:36.810 12:19:43 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:36.810 12:19:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.810 12:19:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.810 ************************************ 00:07:36.810 START TEST nvme_overhead 00:07:36.810 ************************************ 00:07:36.810 12:19:43 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:38.193 Initializing NVMe Controllers 00:07:38.193 Attached to 0000:00:10.0 00:07:38.193 Attached to 0000:00:11.0 00:07:38.193 Attached to 0000:00:13.0 00:07:38.193 Attached to 0000:00:12.0 00:07:38.193 Initialization complete. Launching workers. 
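Editor's note: the nvme_overhead run that follows reports the average submit and complete time per I/O in nanoseconds. A minimal sketch of how such per-I/O timing can be taken with SPDK's tick counters (an illustration only, not the overhead tool's code; ns, qpair and buf are assumed to be set up by the caller):

```c
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cpl;
	*(bool *)arg = true;
}

/* Time one submit/complete round trip: sample the tick counter around
 * the submit call, then poll the qpair until the completion callback
 * fires. Error handling is omitted for brevity. */
static void
sample_io_overhead(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		   void *buf, uint64_t *submit_nsec, uint64_t *complete_nsec)
{
	uint64_t hz = spdk_get_ticks_hz();
	bool done = false;
	uint64_t t0, t1, t2;

	t0 = spdk_get_ticks();
	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1 /* blocks */,
			      io_done, &done, 0 /* io_flags */);
	t1 = spdk_get_ticks();

	while (!done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}
	t2 = spdk_get_ticks();

	*submit_nsec = (t1 - t0) * 1000000000ULL / hz;
	*complete_nsec = (t2 - t1) * 1000000000ULL / hz;
}
```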
00:07:38.193 submit (in ns) avg, min, max = 11378.6, 10304.6, 63286.9 00:07:38.193 complete (in ns) avg, min, max = 7602.4, 7167.7, 304530.0 00:07:38.193 00:07:38.193 Submit histogram 00:07:38.193 ================ 00:07:38.193 Range in us Cumulative Count 00:07:38.193 10.289 - 10.338: 0.0055% ( 1) 00:07:38.193 10.437 - 10.486: 0.0164% ( 2) 00:07:38.193 10.732 - 10.782: 0.0219% ( 1) 00:07:38.193 10.782 - 10.831: 0.0274% ( 1) 00:07:38.193 10.831 - 10.880: 0.0932% ( 12) 00:07:38.193 10.880 - 10.929: 0.8659% ( 141) 00:07:38.193 10.929 - 10.978: 4.4996% ( 663) 00:07:38.193 10.978 - 11.028: 13.6194% ( 1664) 00:07:38.193 11.028 - 11.077: 28.4501% ( 2706) 00:07:38.193 11.077 - 11.126: 46.1416% ( 3228) 00:07:38.193 11.126 - 11.175: 62.0081% ( 2895) 00:07:38.193 11.175 - 11.225: 72.9694% ( 2000) 00:07:38.193 11.225 - 11.274: 79.5133% ( 1194) 00:07:38.193 11.274 - 11.323: 83.4320% ( 715) 00:07:38.193 11.323 - 11.372: 85.5749% ( 391) 00:07:38.193 11.372 - 11.422: 87.1698% ( 291) 00:07:38.193 11.422 - 11.471: 88.1234% ( 174) 00:07:38.193 11.471 - 11.520: 88.7975% ( 123) 00:07:38.193 11.520 - 11.569: 89.2744% ( 87) 00:07:38.193 11.569 - 11.618: 89.6306% ( 65) 00:07:38.193 11.618 - 11.668: 89.8553% ( 41) 00:07:38.193 11.668 - 11.717: 90.1513% ( 54) 00:07:38.193 11.717 - 11.766: 90.3650% ( 39) 00:07:38.193 11.766 - 11.815: 90.5185% ( 28) 00:07:38.193 11.815 - 11.865: 90.6939% ( 32) 00:07:38.193 11.865 - 11.914: 90.9405% ( 45) 00:07:38.193 11.914 - 11.963: 91.3077% ( 67) 00:07:38.193 11.963 - 12.012: 91.6365% ( 60) 00:07:38.193 12.012 - 12.062: 91.9763% ( 62) 00:07:38.193 12.062 - 12.111: 92.4641% ( 89) 00:07:38.193 12.111 - 12.160: 93.1930% ( 133) 00:07:38.193 12.160 - 12.209: 93.9494% ( 138) 00:07:38.193 12.209 - 12.258: 94.6673% ( 131) 00:07:38.193 12.258 - 12.308: 95.2866% ( 113) 00:07:38.193 12.308 - 12.357: 95.7689% ( 88) 00:07:38.193 12.357 - 12.406: 96.1033% ( 61) 00:07:38.193 12.406 - 12.455: 96.3334% ( 42) 00:07:38.193 12.455 - 12.505: 96.5417% ( 38) 00:07:38.193 12.505 - 12.554: 96.6404% ( 18) 00:07:38.193 12.554 - 12.603: 96.7061% ( 12) 00:07:38.193 12.603 - 12.702: 96.8212% ( 21) 00:07:38.193 12.702 - 12.800: 96.8651% ( 8) 00:07:38.193 12.800 - 12.898: 96.9144% ( 9) 00:07:38.193 12.898 - 12.997: 96.9418% ( 5) 00:07:38.193 12.997 - 13.095: 97.0240% ( 15) 00:07:38.193 13.095 - 13.194: 97.1446% ( 22) 00:07:38.193 13.194 - 13.292: 97.2487% ( 19) 00:07:38.193 13.292 - 13.391: 97.3583% ( 20) 00:07:38.193 13.391 - 13.489: 97.4625% ( 19) 00:07:38.193 13.489 - 13.588: 97.5940% ( 24) 00:07:38.193 13.588 - 13.686: 97.6762% ( 15) 00:07:38.193 13.686 - 13.785: 97.7529% ( 14) 00:07:38.193 13.785 - 13.883: 97.7749% ( 4) 00:07:38.193 13.883 - 13.982: 97.8297% ( 10) 00:07:38.193 13.982 - 14.080: 97.8571% ( 5) 00:07:38.193 14.080 - 14.178: 97.8845% ( 5) 00:07:38.193 14.178 - 14.277: 97.9283% ( 8) 00:07:38.193 14.277 - 14.375: 97.9776% ( 9) 00:07:38.193 14.375 - 14.474: 98.0105% ( 6) 00:07:38.193 14.474 - 14.572: 98.0324% ( 4) 00:07:38.193 14.572 - 14.671: 98.0763% ( 8) 00:07:38.193 14.671 - 14.769: 98.0927% ( 3) 00:07:38.193 14.769 - 14.868: 98.1147% ( 4) 00:07:38.193 14.868 - 14.966: 98.1421% ( 5) 00:07:38.193 14.966 - 15.065: 98.1695% ( 5) 00:07:38.193 15.065 - 15.163: 98.1969% ( 5) 00:07:38.193 15.163 - 15.262: 98.2297% ( 6) 00:07:38.193 15.262 - 15.360: 98.2626% ( 6) 00:07:38.193 15.360 - 15.458: 98.2955% ( 6) 00:07:38.193 15.458 - 15.557: 98.3394% ( 8) 00:07:38.193 15.557 - 15.655: 98.3722% ( 6) 00:07:38.193 15.655 - 15.754: 98.3832% ( 2) 00:07:38.193 15.852 - 15.951: 98.4106% ( 5) 00:07:38.193 15.951 - 
16.049: 98.4380% ( 5) 00:07:38.193 16.148 - 16.246: 98.4545% ( 3) 00:07:38.193 16.246 - 16.345: 98.4599% ( 1) 00:07:38.193 16.345 - 16.443: 98.4873% ( 5) 00:07:38.193 16.443 - 16.542: 98.5312% ( 8) 00:07:38.193 16.542 - 16.640: 98.5860% ( 10) 00:07:38.193 16.640 - 16.738: 98.6956% ( 20) 00:07:38.193 16.738 - 16.837: 98.7394% ( 8) 00:07:38.193 16.837 - 16.935: 98.7669% ( 5) 00:07:38.193 16.935 - 17.034: 98.8545% ( 16) 00:07:38.193 17.034 - 17.132: 98.9203% ( 12) 00:07:38.193 17.132 - 17.231: 98.9642% ( 8) 00:07:38.193 17.231 - 17.329: 99.0354% ( 13) 00:07:38.193 17.329 - 17.428: 99.1176% ( 15) 00:07:38.193 17.428 - 17.526: 99.1834% ( 12) 00:07:38.193 17.526 - 17.625: 99.2820% ( 18) 00:07:38.193 17.625 - 17.723: 99.3368% ( 10) 00:07:38.193 17.723 - 17.822: 99.3807% ( 8) 00:07:38.193 17.822 - 17.920: 99.4081% ( 5) 00:07:38.193 17.920 - 18.018: 99.4465% ( 7) 00:07:38.193 18.018 - 18.117: 99.4574% ( 2) 00:07:38.193 18.117 - 18.215: 99.5013% ( 8) 00:07:38.193 18.215 - 18.314: 99.5506% ( 9) 00:07:38.193 18.314 - 18.412: 99.5780% ( 5) 00:07:38.193 18.412 - 18.511: 99.6109% ( 6) 00:07:38.193 18.511 - 18.609: 99.6164% ( 1) 00:07:38.193 18.609 - 18.708: 99.6383% ( 4) 00:07:38.193 18.708 - 18.806: 99.6547% ( 3) 00:07:38.193 18.806 - 18.905: 99.6657% ( 2) 00:07:38.193 18.905 - 19.003: 99.6766% ( 2) 00:07:38.193 19.003 - 19.102: 99.6986% ( 4) 00:07:38.193 19.102 - 19.200: 99.7040% ( 1) 00:07:38.193 19.200 - 19.298: 99.7205% ( 3) 00:07:38.193 19.298 - 19.397: 99.7369% ( 3) 00:07:38.193 19.594 - 19.692: 99.7424% ( 1) 00:07:38.193 19.791 - 19.889: 99.7479% ( 1) 00:07:38.193 19.988 - 20.086: 99.7589% ( 2) 00:07:38.193 20.086 - 20.185: 99.7698% ( 2) 00:07:38.193 20.283 - 20.382: 99.7808% ( 2) 00:07:38.193 20.480 - 20.578: 99.7863% ( 1) 00:07:38.193 20.578 - 20.677: 99.7917% ( 1) 00:07:38.193 20.874 - 20.972: 99.8027% ( 2) 00:07:38.193 20.972 - 21.071: 99.8082% ( 1) 00:07:38.193 21.268 - 21.366: 99.8191% ( 2) 00:07:38.193 21.366 - 21.465: 99.8246% ( 1) 00:07:38.193 21.465 - 21.563: 99.8356% ( 2) 00:07:38.193 21.563 - 21.662: 99.8411% ( 1) 00:07:38.193 21.662 - 21.760: 99.8465% ( 1) 00:07:38.193 22.154 - 22.252: 99.8520% ( 1) 00:07:38.193 22.351 - 22.449: 99.8630% ( 2) 00:07:38.193 22.843 - 22.942: 99.8685% ( 1) 00:07:38.193 22.942 - 23.040: 99.8739% ( 1) 00:07:38.193 23.237 - 23.335: 99.8794% ( 1) 00:07:38.193 23.434 - 23.532: 99.8904% ( 2) 00:07:38.193 23.532 - 23.631: 99.9013% ( 2) 00:07:38.193 23.729 - 23.828: 99.9068% ( 1) 00:07:38.193 24.123 - 24.222: 99.9123% ( 1) 00:07:38.193 24.222 - 24.320: 99.9178% ( 1) 00:07:38.193 24.517 - 24.615: 99.9233% ( 1) 00:07:38.193 24.615 - 24.714: 99.9288% ( 1) 00:07:38.193 26.191 - 26.388: 99.9397% ( 2) 00:07:38.193 29.932 - 30.129: 99.9452% ( 1) 00:07:38.193 34.068 - 34.265: 99.9507% ( 1) 00:07:38.193 36.234 - 36.431: 99.9562% ( 1) 00:07:38.193 38.203 - 38.400: 99.9616% ( 1) 00:07:38.193 38.597 - 38.794: 99.9671% ( 1) 00:07:38.193 39.582 - 39.778: 99.9726% ( 1) 00:07:38.193 47.655 - 47.852: 99.9781% ( 1) 00:07:38.193 49.034 - 49.231: 99.9836% ( 1) 00:07:38.193 51.988 - 52.382: 99.9890% ( 1) 00:07:38.193 52.775 - 53.169: 99.9945% ( 1) 00:07:38.193 63.015 - 63.409: 100.0000% ( 1) 00:07:38.193 00:07:38.193 Complete histogram 00:07:38.193 ================== 00:07:38.193 Range in us Cumulative Count 00:07:38.193 7.138 - 7.188: 0.0329% ( 6) 00:07:38.193 7.188 - 7.237: 0.5919% ( 102) 00:07:38.193 7.237 - 7.286: 4.8011% ( 768) 00:07:38.193 7.286 - 7.335: 20.8977% ( 2937) 00:07:38.193 7.335 - 7.385: 47.1994% ( 4799) 00:07:38.193 7.385 - 7.434: 69.4180% ( 4054) 00:07:38.193 
7.434 - 7.483: 81.4754% ( 2200) 00:07:38.193 7.483 - 7.532: 87.5041% ( 1100) 00:07:38.193 7.532 - 7.582: 90.8638% ( 613) 00:07:38.193 7.582 - 7.631: 93.0505% ( 399) 00:07:38.193 7.631 - 7.680: 94.3111% ( 230) 00:07:38.193 7.680 - 7.729: 94.8811% ( 104) 00:07:38.193 7.729 - 7.778: 95.2483% ( 67) 00:07:38.193 7.778 - 7.828: 95.4346% ( 34) 00:07:38.193 7.828 - 7.877: 95.5662% ( 24) 00:07:38.193 7.877 - 7.926: 95.7196% ( 28) 00:07:38.193 7.926 - 7.975: 95.8785% ( 29) 00:07:38.193 7.975 - 8.025: 96.0978% ( 40) 00:07:38.193 8.025 - 8.074: 96.5198% ( 77) 00:07:38.193 8.074 - 8.123: 96.9418% ( 77) 00:07:38.193 8.123 - 8.172: 97.2432% ( 55) 00:07:38.193 8.172 - 8.222: 97.4625% ( 40) 00:07:38.193 8.222 - 8.271: 97.5556% ( 17) 00:07:38.193 8.271 - 8.320: 97.6050% ( 9) 00:07:38.193 8.320 - 8.369: 97.6378% ( 6) 00:07:38.193 8.369 - 8.418: 97.6817% ( 8) 00:07:38.193 8.418 - 8.468: 97.6981% ( 3) 00:07:38.193 8.468 - 8.517: 97.7255% ( 5) 00:07:38.193 8.517 - 8.566: 97.7584% ( 6) 00:07:38.193 8.615 - 8.665: 97.7639% ( 1) 00:07:38.193 8.763 - 8.812: 97.7803% ( 3) 00:07:38.193 8.862 - 8.911: 97.7858% ( 1) 00:07:38.193 8.911 - 8.960: 97.7968% ( 2) 00:07:38.193 8.960 - 9.009: 97.8023% ( 1) 00:07:38.193 9.058 - 9.108: 97.8077% ( 1) 00:07:38.193 9.108 - 9.157: 97.8132% ( 1) 00:07:38.193 9.206 - 9.255: 97.8187% ( 1) 00:07:38.193 9.354 - 9.403: 97.8242% ( 1) 00:07:38.193 9.502 - 9.551: 97.8406% ( 3) 00:07:38.194 9.551 - 9.600: 97.8516% ( 2) 00:07:38.194 9.649 - 9.698: 97.8571% ( 1) 00:07:38.194 9.748 - 9.797: 97.8680% ( 2) 00:07:38.194 9.797 - 9.846: 97.8899% ( 4) 00:07:38.194 9.846 - 9.895: 97.8954% ( 1) 00:07:38.194 9.994 - 10.043: 97.9174% ( 4) 00:07:38.194 10.043 - 10.092: 97.9228% ( 1) 00:07:38.194 10.092 - 10.142: 97.9393% ( 3) 00:07:38.194 10.142 - 10.191: 97.9612% ( 4) 00:07:38.194 10.191 - 10.240: 97.9667% ( 1) 00:07:38.194 10.240 - 10.289: 97.9776% ( 2) 00:07:38.194 10.289 - 10.338: 98.0050% ( 5) 00:07:38.194 10.338 - 10.388: 98.0270% ( 4) 00:07:38.194 10.388 - 10.437: 98.0324% ( 1) 00:07:38.194 10.535 - 10.585: 98.0489% ( 3) 00:07:38.194 10.585 - 10.634: 98.0544% ( 1) 00:07:38.194 10.634 - 10.683: 98.0653% ( 2) 00:07:38.194 10.683 - 10.732: 98.0763% ( 2) 00:07:38.194 10.732 - 10.782: 98.0873% ( 2) 00:07:38.194 10.782 - 10.831: 98.0982% ( 2) 00:07:38.194 10.831 - 10.880: 98.1037% ( 1) 00:07:38.194 10.880 - 10.929: 98.1092% ( 1) 00:07:38.194 11.028 - 11.077: 98.1147% ( 1) 00:07:38.194 11.126 - 11.175: 98.1311% ( 3) 00:07:38.194 11.175 - 11.225: 98.1366% ( 1) 00:07:38.194 11.225 - 11.274: 98.1475% ( 2) 00:07:38.194 11.274 - 11.323: 98.1530% ( 1) 00:07:38.194 11.372 - 11.422: 98.1585% ( 1) 00:07:38.194 11.422 - 11.471: 98.1640% ( 1) 00:07:38.194 11.618 - 11.668: 98.1695% ( 1) 00:07:38.194 11.766 - 11.815: 98.1749% ( 1) 00:07:38.194 11.815 - 11.865: 98.1914% ( 3) 00:07:38.194 11.865 - 11.914: 98.1969% ( 1) 00:07:38.194 12.012 - 12.062: 98.2023% ( 1) 00:07:38.194 12.209 - 12.258: 98.2133% ( 2) 00:07:38.194 12.308 - 12.357: 98.2243% ( 2) 00:07:38.194 12.357 - 12.406: 98.2297% ( 1) 00:07:38.194 12.554 - 12.603: 98.2352% ( 1) 00:07:38.194 12.603 - 12.702: 98.2462% ( 2) 00:07:38.194 12.702 - 12.800: 98.3065% ( 11) 00:07:38.194 12.800 - 12.898: 98.3832% ( 14) 00:07:38.194 12.898 - 12.997: 98.4435% ( 11) 00:07:38.194 12.997 - 13.095: 98.5038% ( 11) 00:07:38.194 13.095 - 13.194: 98.5805% ( 14) 00:07:38.194 13.194 - 13.292: 98.6572% ( 14) 00:07:38.194 13.292 - 13.391: 98.7504% ( 17) 00:07:38.194 13.391 - 13.489: 98.8381% ( 16) 00:07:38.194 13.489 - 13.588: 98.9422% ( 19) 00:07:38.194 13.588 - 13.686: 99.0354% ( 17) 
00:07:38.194 13.686 - 13.785: 99.1176% ( 15) 00:07:38.194 13.785 - 13.883: 99.1779% ( 11) 00:07:38.194 13.883 - 13.982: 99.2217% ( 8) 00:07:38.194 13.982 - 14.080: 99.3204% ( 18) 00:07:38.194 14.080 - 14.178: 99.3862% ( 12) 00:07:38.194 14.178 - 14.277: 99.4355% ( 9) 00:07:38.194 14.277 - 14.375: 99.4958% ( 11) 00:07:38.194 14.375 - 14.474: 99.5122% ( 3) 00:07:38.194 14.474 - 14.572: 99.5287% ( 3) 00:07:38.194 14.572 - 14.671: 99.5615% ( 6) 00:07:38.194 14.671 - 14.769: 99.5944% ( 6) 00:07:38.194 14.769 - 14.868: 99.6164% ( 4) 00:07:38.194 14.868 - 14.966: 99.6328% ( 3) 00:07:38.194 15.065 - 15.163: 99.6383% ( 1) 00:07:38.194 15.163 - 15.262: 99.6492% ( 2) 00:07:38.194 15.262 - 15.360: 99.6602% ( 2) 00:07:38.194 15.360 - 15.458: 99.6712% ( 2) 00:07:38.194 15.458 - 15.557: 99.6766% ( 1) 00:07:38.194 15.557 - 15.655: 99.6821% ( 1) 00:07:38.194 15.754 - 15.852: 99.6931% ( 2) 00:07:38.194 15.951 - 16.049: 99.6986% ( 1) 00:07:38.194 16.049 - 16.148: 99.7095% ( 2) 00:07:38.194 16.443 - 16.542: 99.7150% ( 1) 00:07:38.194 16.542 - 16.640: 99.7205% ( 1) 00:07:38.194 16.640 - 16.738: 99.7260% ( 1) 00:07:38.194 16.738 - 16.837: 99.7369% ( 2) 00:07:38.194 16.837 - 16.935: 99.7424% ( 1) 00:07:38.194 17.034 - 17.132: 99.7534% ( 2) 00:07:38.194 17.132 - 17.231: 99.7589% ( 1) 00:07:38.194 17.231 - 17.329: 99.7643% ( 1) 00:07:38.194 17.428 - 17.526: 99.7698% ( 1) 00:07:38.194 17.526 - 17.625: 99.7808% ( 2) 00:07:38.194 17.723 - 17.822: 99.7863% ( 1) 00:07:38.194 17.822 - 17.920: 99.7972% ( 2) 00:07:38.194 18.117 - 18.215: 99.8027% ( 1) 00:07:38.194 18.314 - 18.412: 99.8082% ( 1) 00:07:38.194 18.412 - 18.511: 99.8137% ( 1) 00:07:38.194 18.609 - 18.708: 99.8301% ( 3) 00:07:38.194 18.708 - 18.806: 99.8356% ( 1) 00:07:38.194 18.905 - 19.003: 99.8465% ( 2) 00:07:38.194 19.200 - 19.298: 99.8520% ( 1) 00:07:38.194 19.791 - 19.889: 99.8575% ( 1) 00:07:38.194 20.185 - 20.283: 99.8630% ( 1) 00:07:38.194 20.382 - 20.480: 99.8739% ( 2) 00:07:38.194 20.480 - 20.578: 99.8794% ( 1) 00:07:38.194 21.169 - 21.268: 99.8849% ( 1) 00:07:38.194 21.366 - 21.465: 99.8904% ( 1) 00:07:38.194 21.662 - 21.760: 99.9013% ( 2) 00:07:38.194 21.858 - 21.957: 99.9123% ( 2) 00:07:38.194 22.055 - 22.154: 99.9233% ( 2) 00:07:38.194 22.252 - 22.351: 99.9288% ( 1) 00:07:38.194 22.548 - 22.646: 99.9342% ( 1) 00:07:38.194 23.237 - 23.335: 99.9397% ( 1) 00:07:38.194 25.797 - 25.994: 99.9507% ( 2) 00:07:38.194 25.994 - 26.191: 99.9562% ( 1) 00:07:38.194 28.160 - 28.357: 99.9616% ( 1) 00:07:38.194 30.720 - 30.917: 99.9671% ( 1) 00:07:38.194 30.917 - 31.114: 99.9726% ( 1) 00:07:38.194 37.612 - 37.809: 99.9781% ( 1) 00:07:38.194 40.763 - 40.960: 99.9836% ( 1) 00:07:38.194 159.114 - 159.902: 99.9890% ( 1) 00:07:38.194 304.049 - 305.625: 100.0000% ( 2) 00:07:38.194 00:07:38.194 00:07:38.194 real 0m1.204s 00:07:38.194 user 0m1.078s 00:07:38.194 sys 0m0.085s 00:07:38.194 12:19:45 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.194 ************************************ 00:07:38.194 END TEST nvme_overhead 00:07:38.194 ************************************ 00:07:38.194 12:19:45 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:38.194 12:19:45 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:38.194 12:19:45 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:38.194 12:19:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.194 12:19:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.194 
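Editor's note: next comes nvme_arbitration, which pins worker threads to cores and drives I/O queues of different priority classes against the same controllers. A minimal sketch of allocating a priority-class qpair through SPDK's public API, assuming the controller was initialized with weighted round robin arbitration enabled (as the arbitration example arranges):

```c
#include "spdk/nvme.h"

/* Allocate an I/O qpair in the 'urgent' WRR priority class. The qprio
 * field is only honored when weighted round robin arbitration was
 * negotiated at controller initialization. */
static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.qprio = SPDK_NVME_QPRIO_URGENT;

	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
```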
************************************ 00:07:38.194 START TEST nvme_arbitration 00:07:38.194 ************************************ 00:07:38.194 12:19:45 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:41.478 Initializing NVMe Controllers 00:07:41.478 Attached to 0000:00:10.0 00:07:41.478 Attached to 0000:00:11.0 00:07:41.478 Attached to 0000:00:13.0 00:07:41.478 Attached to 0000:00:12.0 00:07:41.478 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:41.478 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:41.478 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:41.478 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:41.478 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:41.478 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:41.478 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:41.478 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:41.478 Initialization complete. Launching workers. 00:07:41.478 Starting thread on core 1 with urgent priority queue 00:07:41.478 Starting thread on core 2 with urgent priority queue 00:07:41.478 Starting thread on core 3 with urgent priority queue 00:07:41.478 Starting thread on core 0 with urgent priority queue 00:07:41.478 QEMU NVMe Ctrl (12340 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:07:41.478 QEMU NVMe Ctrl (12342 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:07:41.478 QEMU NVMe Ctrl (12341 ) core 1: 1002.67 IO/s 99.73 secs/100000 ios 00:07:41.478 QEMU NVMe Ctrl (12342 ) core 1: 1002.67 IO/s 99.73 secs/100000 ios 00:07:41.478 QEMU NVMe Ctrl (12343 ) core 2: 938.67 IO/s 106.53 secs/100000 ios 00:07:41.479 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:07:41.479 ======================================================== 00:07:41.479 00:07:41.479 00:07:41.479 real 0m3.308s 00:07:41.479 user 0m9.227s 00:07:41.479 sys 0m0.118s 00:07:41.479 ************************************ 00:07:41.479 END TEST nvme_arbitration 00:07:41.479 12:19:48 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.479 12:19:48 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:41.479 ************************************ 00:07:41.479 12:19:48 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:41.479 12:19:48 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:41.479 12:19:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.479 12:19:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.479 ************************************ 00:07:41.479 START TEST nvme_single_aen 00:07:41.479 ************************************ 00:07:41.479 12:19:48 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:41.740 Asynchronous Event Request test 00:07:41.740 Attached to 0000:00:10.0 00:07:41.740 Attached to 0000:00:11.0 00:07:41.740 Attached to 0000:00:13.0 00:07:41.740 Attached to 0000:00:12.0 00:07:41.740 Reset controller to setup AER completions for this process 00:07:41.740 Registering asynchronous event callbacks... 
00:07:41.740 Getting orig temperature thresholds of all controllers 00:07:41.740 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.740 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.740 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.740 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.740 Setting all controllers temperature threshold low to trigger AER 00:07:41.740 Waiting for all controllers temperature threshold to be set lower 00:07:41.740 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.740 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:41.740 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.740 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:41.740 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.740 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:41.740 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.740 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:41.740 Waiting for all controllers to trigger AER and reset threshold 00:07:41.740 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.740 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.740 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.740 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.740 Cleaning up... 00:07:41.740 00:07:41.740 real 0m0.223s 00:07:41.740 user 0m0.085s 00:07:41.740 sys 0m0.093s 00:07:41.740 ************************************ 00:07:41.740 END TEST nvme_single_aen 00:07:41.740 ************************************ 00:07:41.740 12:19:48 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.740 12:19:48 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:41.740 12:19:48 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:41.740 12:19:48 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.740 12:19:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.740 12:19:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.740 ************************************ 00:07:41.740 START TEST nvme_doorbell_aers 00:07:41.740 ************************************ 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:41.740 12:19:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:42.001 [2024-12-16 12:19:48.989964] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:07:52.003 Executing: test_write_invalid_db 00:07:52.003 Waiting for AER completion... 00:07:52.003 Failure: test_write_invalid_db 00:07:52.003 00:07:52.003 Executing: test_invalid_db_write_overflow_sq 00:07:52.003 Waiting for AER completion... 00:07:52.003 Failure: test_invalid_db_write_overflow_sq 00:07:52.003 00:07:52.003 Executing: test_invalid_db_write_overflow_cq 00:07:52.003 Waiting for AER completion... 00:07:52.003 Failure: test_invalid_db_write_overflow_cq 00:07:52.003 00:07:52.003 12:19:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:52.003 12:19:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:52.003 [2024-12-16 12:19:59.024537] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:01.969 Executing: test_write_invalid_db 00:08:01.969 Waiting for AER completion... 00:08:01.969 Failure: test_write_invalid_db 00:08:01.969 00:08:01.969 Executing: test_invalid_db_write_overflow_sq 00:08:01.969 Waiting for AER completion... 00:08:01.969 Failure: test_invalid_db_write_overflow_sq 00:08:01.969 00:08:01.969 Executing: test_invalid_db_write_overflow_cq 00:08:01.969 Waiting for AER completion... 00:08:01.969 Failure: test_invalid_db_write_overflow_cq 00:08:01.969 00:08:01.969 12:20:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:01.969 12:20:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:01.969 [2024-12-16 12:20:09.053632] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:11.936 Executing: test_write_invalid_db 00:08:11.936 Waiting for AER completion... 00:08:11.936 Failure: test_write_invalid_db 00:08:11.936 00:08:11.936 Executing: test_invalid_db_write_overflow_sq 00:08:11.936 Waiting for AER completion... 00:08:11.936 Failure: test_invalid_db_write_overflow_sq 00:08:11.936 00:08:11.936 Executing: test_invalid_db_write_overflow_cq 00:08:11.936 Waiting for AER completion... 
00:08:11.936 Failure: test_invalid_db_write_overflow_cq 00:08:11.936 00:08:11.936 12:20:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:11.936 12:20:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:13.558 [2024-12-16 12:20:19.095962] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 Executing: test_write_invalid_db 00:08:23.523 Waiting for AER completion... 00:08:23.523 Failure: test_write_invalid_db 00:08:23.523 00:08:23.523 Executing: test_invalid_db_write_overflow_sq 00:08:23.523 Waiting for AER completion... 00:08:23.523 Failure: test_invalid_db_write_overflow_sq 00:08:23.523 00:08:23.523 Executing: test_invalid_db_write_overflow_cq 00:08:23.523 Waiting for AER completion... 00:08:23.523 Failure: test_invalid_db_write_overflow_cq 00:08:23.523 00:08:23.523 ************************************ 00:08:23.523 END TEST nvme_doorbell_aers 00:08:23.523 ************************************ 00:08:23.523 00:08:23.523 real 0m40.179s 00:08:23.523 user 0m34.113s 00:08:23.523 sys 0m5.685s 00:08:23.523 12:20:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.523 12:20:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:23.523 12:20:28 nvme -- nvme/nvme.sh@97 -- # uname 00:08:23.523 12:20:28 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:23.523 12:20:28 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:23.523 12:20:28 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:23.523 12:20:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.523 12:20:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.523 ************************************ 00:08:23.523 START TEST nvme_multi_aen 00:08:23.523 ************************************ 00:08:23.523 12:20:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:23.523 [2024-12-16 12:20:29.116185] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 [2024-12-16 12:20:29.116789] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 [2024-12-16 12:20:29.116873] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 [2024-12-16 12:20:29.118273] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 [2024-12-16 12:20:29.118389] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 [2024-12-16 12:20:29.118539] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 [2024-12-16 12:20:29.119703] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. 
Dropping the request. 00:08:23.523 [2024-12-16 12:20:29.119874] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 [2024-12-16 12:20:29.119990] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 [2024-12-16 12:20:29.121055] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 [2024-12-16 12:20:29.121219] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 [2024-12-16 12:20:29.121344] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64956) is not found. Dropping the request. 00:08:23.523 Child process pid: 65482 00:08:23.523 [Child] Asynchronous Event Request test 00:08:23.523 [Child] Attached to 0000:00:10.0 00:08:23.523 [Child] Attached to 0000:00:11.0 00:08:23.523 [Child] Attached to 0000:00:13.0 00:08:23.523 [Child] Attached to 0000:00:12.0 00:08:23.523 [Child] Registering asynchronous event callbacks... 00:08:23.523 [Child] Getting orig temperature thresholds of all controllers 00:08:23.523 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.523 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.523 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.523 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.523 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:23.523 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.523 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.523 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.523 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.523 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.523 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.523 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.523 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.523 [Child] Cleaning up... 00:08:23.523 Asynchronous Event Request test 00:08:23.523 Attached to 0000:00:10.0 00:08:23.523 Attached to 0000:00:11.0 00:08:23.523 Attached to 0000:00:13.0 00:08:23.523 Attached to 0000:00:12.0 00:08:23.523 Reset controller to setup AER completions for this process 00:08:23.523 Registering asynchronous event callbacks... 
00:08:23.523 Getting orig temperature thresholds of all controllers 00:08:23.523 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.523 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.523 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.523 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.523 Setting all controllers temperature threshold low to trigger AER 00:08:23.523 Waiting for all controllers temperature threshold to be set lower 00:08:23.523 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.523 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:23.523 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.523 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:23.523 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.523 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:23.523 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.523 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:23.523 Waiting for all controllers to trigger AER and reset threshold 00:08:23.523 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.523 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.523 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.523 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.523 Cleaning up... 00:08:23.523 00:08:23.523 real 0m0.436s 00:08:23.523 user 0m0.151s 00:08:23.523 sys 0m0.179s 00:08:23.523 ************************************ 00:08:23.523 END TEST nvme_multi_aen 00:08:23.523 ************************************ 00:08:23.523 12:20:29 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.523 12:20:29 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:23.523 12:20:29 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:23.523 12:20:29 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:23.523 12:20:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.523 12:20:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.523 ************************************ 00:08:23.523 START TEST nvme_startup 00:08:23.523 ************************************ 00:08:23.523 12:20:29 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:23.523 Initializing NVMe Controllers 00:08:23.523 Attached to 0000:00:10.0 00:08:23.523 Attached to 0000:00:11.0 00:08:23.523 Attached to 0000:00:13.0 00:08:23.523 Attached to 0000:00:12.0 00:08:23.523 Initialization complete. 00:08:23.523 Time used:150194.172 (us). 
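Note on the multi-AEN pass recorded above: it is driven by a single test binary. A minimal reconstruction of its invocation follows; the path and the -m/-T/-i flags are verbatim from the run_test line earlier in this log, the flag descriptions are inferred from the output above, and the shared-memory reading of -i is an assumption, not confirmed by the trace.
    # Reconstructed from the trace above; flag semantics inferred, not authoritative.
    aer=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
    # -T: push every controller's temperature threshold below its current
    #     temperature (343 K original threshold vs. 323 K reported) so each
    #     drive raises an async event; aer_cb then reads log page 2 and
    #     restores the threshold, as logged above.
    # -m: also exercise the flow from a spawned child process (pid 65482 above).
    # -i 0: assumed to be the shared-memory instance ID for multi-process mode.
    "$aer" -m -T -i 0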
00:08:23.523 00:08:23.523 real 0m0.213s 00:08:23.523 user 0m0.066s 00:08:23.523 sys 0m0.101s 00:08:23.523 ************************************ 00:08:23.523 END TEST nvme_startup 00:08:23.523 ************************************ 00:08:23.523 12:20:29 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.523 12:20:29 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:23.523 12:20:29 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:23.523 12:20:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.523 12:20:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.523 12:20:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.523 ************************************ 00:08:23.523 START TEST nvme_multi_secondary 00:08:23.523 ************************************ 00:08:23.523 12:20:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:23.523 12:20:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65532 00:08:23.523 12:20:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:23.523 12:20:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65533 00:08:23.523 12:20:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:23.523 12:20:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:26.053 Initializing NVMe Controllers 00:08:26.053 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:26.053 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:26.053 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:26.053 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:26.053 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:26.053 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:26.053 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:26.053 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:26.053 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:26.053 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:26.053 Initialization complete. Launching workers. 
00:08:26.053 ======================================================== 00:08:26.053 Latency(us) 00:08:26.053 Device Information : IOPS MiB/s Average min max 00:08:26.053 PCIE (0000:00:10.0) NSID 1 from core 1: 7737.11 30.22 2066.59 935.26 5762.78 00:08:26.053 PCIE (0000:00:11.0) NSID 1 from core 1: 7737.11 30.22 2067.53 1052.37 5586.45 00:08:26.053 PCIE (0000:00:13.0) NSID 1 from core 1: 7737.11 30.22 2067.49 1006.09 5876.28 00:08:26.053 PCIE (0000:00:12.0) NSID 1 from core 1: 7737.11 30.22 2067.46 1082.28 5896.39 00:08:26.053 PCIE (0000:00:12.0) NSID 2 from core 1: 7737.11 30.22 2067.43 1067.28 5837.34 00:08:26.053 PCIE (0000:00:12.0) NSID 3 from core 1: 7737.11 30.22 2067.41 972.18 5976.71 00:08:26.053 ======================================================== 00:08:26.053 Total : 46422.64 181.34 2067.32 935.26 5976.71 00:08:26.053 00:08:26.053 Initializing NVMe Controllers 00:08:26.053 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:26.053 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:26.053 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:26.053 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:26.053 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:26.053 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:26.053 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:26.053 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:26.053 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:26.053 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:26.053 Initialization complete. Launching workers. 00:08:26.053 ======================================================== 00:08:26.053 Latency(us) 00:08:26.053 Device Information : IOPS MiB/s Average min max 00:08:26.053 PCIE (0000:00:10.0) NSID 1 from core 2: 3273.66 12.79 4885.85 768.05 13533.18 00:08:26.053 PCIE (0000:00:11.0) NSID 1 from core 2: 3273.66 12.79 4887.08 871.94 12704.54 00:08:26.053 PCIE (0000:00:13.0) NSID 1 from core 2: 3273.66 12.79 4887.08 968.78 13317.35 00:08:26.053 PCIE (0000:00:12.0) NSID 1 from core 2: 3273.66 12.79 4886.58 978.49 13346.23 00:08:26.053 PCIE (0000:00:12.0) NSID 2 from core 2: 3273.66 12.79 4886.85 765.52 13939.11 00:08:26.053 PCIE (0000:00:12.0) NSID 3 from core 2: 3273.66 12.79 4886.76 772.71 13935.31 00:08:26.053 ======================================================== 00:08:26.053 Total : 19641.96 76.73 4886.70 765.52 13939.11 00:08:26.053 00:08:26.053 12:20:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65532 00:08:27.956 Initializing NVMe Controllers 00:08:27.956 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:27.956 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:27.956 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:27.956 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:27.956 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:27.956 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:27.956 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:27.956 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:27.956 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:27.956 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:27.956 Initialization complete. Launching workers. 
00:08:27.956 ======================================================== 00:08:27.956 Latency(us) 00:08:27.956 Device Information : IOPS MiB/s Average min max 00:08:27.956 PCIE (0000:00:10.0) NSID 1 from core 0: 11053.47 43.18 1446.27 692.66 6666.28 00:08:27.956 PCIE (0000:00:11.0) NSID 1 from core 0: 11053.47 43.18 1447.13 677.37 6036.19 00:08:27.956 PCIE (0000:00:13.0) NSID 1 from core 0: 11053.47 43.18 1447.10 706.99 10218.61 00:08:27.956 PCIE (0000:00:12.0) NSID 1 from core 0: 11053.47 43.18 1447.08 705.68 10365.51 00:08:27.956 PCIE (0000:00:12.0) NSID 2 from core 0: 11053.47 43.18 1447.06 710.38 10502.84 00:08:27.956 PCIE (0000:00:12.0) NSID 3 from core 0: 11053.47 43.18 1447.04 708.86 10536.12 00:08:27.956 ======================================================== 00:08:27.956 Total : 66320.80 259.07 1446.95 677.37 10536.12 00:08:27.956 00:08:27.956 12:20:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65533 00:08:27.956 12:20:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65603 00:08:27.956 12:20:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:27.956 12:20:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65605 00:08:27.956 12:20:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:27.956 12:20:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:31.250 Initializing NVMe Controllers 00:08:31.250 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.250 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.250 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.250 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.250 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:31.250 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:31.250 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:31.250 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:31.250 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:31.250 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:31.250 Initialization complete. Launching workers. 
00:08:31.250 ======================================================== 00:08:31.250 Latency(us) 00:08:31.250 Device Information : IOPS MiB/s Average min max 00:08:31.250 PCIE (0000:00:10.0) NSID 1 from core 0: 7991.65 31.22 2000.75 697.04 5199.17 00:08:31.250 PCIE (0000:00:11.0) NSID 1 from core 0: 7991.65 31.22 2001.75 722.21 5352.46 00:08:31.250 PCIE (0000:00:13.0) NSID 1 from core 0: 7991.65 31.22 2001.81 722.00 5050.58 00:08:31.250 PCIE (0000:00:12.0) NSID 1 from core 0: 7991.65 31.22 2001.78 714.56 5354.62 00:08:31.250 PCIE (0000:00:12.0) NSID 2 from core 0: 7991.65 31.22 2001.75 724.61 5611.27 00:08:31.250 PCIE (0000:00:12.0) NSID 3 from core 0: 7991.65 31.22 2001.87 720.11 5549.63 00:08:31.250 ======================================================== 00:08:31.250 Total : 47949.87 187.30 2001.62 697.04 5611.27 00:08:31.250 00:08:31.250 Initializing NVMe Controllers 00:08:31.250 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.250 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.250 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.250 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.250 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:31.250 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:31.250 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:31.250 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:31.250 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:31.250 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:31.250 Initialization complete. Launching workers. 00:08:31.250 ======================================================== 00:08:31.250 Latency(us) 00:08:31.250 Device Information : IOPS MiB/s Average min max 00:08:31.250 PCIE (0000:00:10.0) NSID 1 from core 1: 7791.73 30.44 2052.06 715.06 6503.36 00:08:31.250 PCIE (0000:00:11.0) NSID 1 from core 1: 7791.73 30.44 2052.97 733.72 6518.74 00:08:31.251 PCIE (0000:00:13.0) NSID 1 from core 1: 7791.73 30.44 2052.91 670.42 6058.51 00:08:31.251 PCIE (0000:00:12.0) NSID 1 from core 1: 7791.73 30.44 2052.85 634.12 6008.67 00:08:31.251 PCIE (0000:00:12.0) NSID 2 from core 1: 7791.73 30.44 2052.81 609.24 6053.58 00:08:31.251 PCIE (0000:00:12.0) NSID 3 from core 1: 7791.73 30.44 2052.84 703.84 5858.42 00:08:31.251 ======================================================== 00:08:31.251 Total : 46750.39 182.62 2052.74 609.24 6518.74 00:08:31.251 00:08:33.782 Initializing NVMe Controllers 00:08:33.782 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.782 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:33.782 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:33.782 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:33.782 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:33.782 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:33.782 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:33.782 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:33.782 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:33.782 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:33.782 Initialization complete. Launching workers. 
00:08:33.782 ======================================================== 00:08:33.782 Latency(us) 00:08:33.782 Device Information : IOPS MiB/s Average min max 00:08:33.782 PCIE (0000:00:10.0) NSID 1 from core 2: 4871.98 19.03 3282.27 702.19 15755.96 00:08:33.782 PCIE (0000:00:11.0) NSID 1 from core 2: 4871.98 19.03 3283.44 694.48 16068.13 00:08:33.782 PCIE (0000:00:13.0) NSID 1 from core 2: 4871.98 19.03 3283.38 739.42 15151.09 00:08:33.782 PCIE (0000:00:12.0) NSID 1 from core 2: 4871.98 19.03 3283.32 744.32 14592.98 00:08:33.782 PCIE (0000:00:12.0) NSID 2 from core 2: 4871.98 19.03 3283.26 743.94 14925.41 00:08:33.782 PCIE (0000:00:12.0) NSID 3 from core 2: 4871.98 19.03 3283.21 743.95 15169.26 00:08:33.782 ======================================================== 00:08:33.782 Total : 29231.88 114.19 3283.15 694.48 16068.13 00:08:33.782 00:08:33.782 ************************************ 00:08:33.782 END TEST nvme_multi_secondary 00:08:33.782 ************************************ 00:08:33.782 12:20:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65603 00:08:33.783 12:20:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65605 00:08:33.783 00:08:33.783 real 0m10.734s 00:08:33.783 user 0m18.418s 00:08:33.783 sys 0m0.618s 00:08:33.783 12:20:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.783 12:20:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:33.783 12:20:40 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:33.783 12:20:40 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:33.783 12:20:40 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64564 ]] 00:08:33.783 12:20:40 nvme -- common/autotest_common.sh@1094 -- # kill 64564 00:08:33.783 12:20:40 nvme -- common/autotest_common.sh@1095 -- # wait 64564 00:08:33.783 [2024-12-16 12:20:40.437893] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.438122] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.438177] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.438198] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.440646] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.440703] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.440721] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.440739] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.443171] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 
00:08:33.783 [2024-12-16 12:20:40.443224] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.443241] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.443259] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.445690] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.445748] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.445765] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.445783] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65481) is not found. Dropping the request. 00:08:33.783 [2024-12-16 12:20:40.559153] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:33.783 12:20:40 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:33.783 12:20:40 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:33.783 12:20:40 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:33.783 12:20:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.783 12:20:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.783 12:20:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.783 ************************************ 00:08:33.783 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:33.783 ************************************ 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:33.783 * Looking for test storage... 
00:08:33.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:33.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.783 --rc genhtml_branch_coverage=1 00:08:33.783 --rc genhtml_function_coverage=1 00:08:33.783 --rc genhtml_legend=1 00:08:33.783 --rc geninfo_all_blocks=1 00:08:33.783 --rc geninfo_unexecuted_blocks=1 00:08:33.783 00:08:33.783 ' 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:33.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.783 --rc genhtml_branch_coverage=1 00:08:33.783 --rc genhtml_function_coverage=1 00:08:33.783 --rc genhtml_legend=1 00:08:33.783 --rc geninfo_all_blocks=1 00:08:33.783 --rc geninfo_unexecuted_blocks=1 00:08:33.783 00:08:33.783 ' 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:33.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.783 --rc genhtml_branch_coverage=1 00:08:33.783 --rc genhtml_function_coverage=1 00:08:33.783 --rc genhtml_legend=1 00:08:33.783 --rc geninfo_all_blocks=1 00:08:33.783 --rc geninfo_unexecuted_blocks=1 00:08:33.783 00:08:33.783 ' 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:33.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.783 --rc genhtml_branch_coverage=1 00:08:33.783 --rc genhtml_function_coverage=1 00:08:33.783 --rc genhtml_legend=1 00:08:33.783 --rc geninfo_all_blocks=1 00:08:33.783 --rc geninfo_unexecuted_blocks=1 00:08:33.783 00:08:33.783 ' 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:33.783 
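Note on the scripts/common.sh trace above: it is a version check (lt 1.15 2) deciding which lcov coverage flags get exported. A condensed, hedged sketch of the comparison logic being stepped through follows; it mirrors the xtrace rather than quoting scripts/common.sh verbatim, and only the '<' path exercised in this run is implemented.
    # Hedged reconstruction of the cmp_versions walk traced above (only '<' handled).
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # first differing field decides
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions are not strictly less
    }
    lt 1.15 2 && echo "lcov 1.15 < 2"   # the branch this run takes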
12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:33.783 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65765 00:08:33.784 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:33.784 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:33.784 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65765 00:08:33.784 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65765 ']' 00:08:33.784 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.784 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.784 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
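Note on get_first_nvme_bdf, traced above: it resolves the PCI address the stuck-admin-command test will target. A hedged one-liner equivalent follows; the gen_nvme.sh path and the jq filter are verbatim from the trace, while head -n1 stands in for the helper's array indexing and is an assumption.
    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    echo "$bdf"   # -> 0000:00:10.0 in this run, first of the four controllers found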
00:08:33.784 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.784 12:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:33.784 [2024-12-16 12:20:40.880559] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:33.784 [2024-12-16 12:20:40.880842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65765 ] 00:08:34.045 [2024-12-16 12:20:41.049747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.304 [2024-12-16 12:20:41.177905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.304 [2024-12-16 12:20:41.178243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.304 [2024-12-16 12:20:41.178610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.304 [2024-12-16 12:20:41.178612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:34.870 nvme0n1 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_uRFNY.txt 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:34.870 true 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734351641 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65788 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:34.870 12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:34.870 
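Note on the setup just traced: it arms the stuck-admin-command scenario. A condensed replay follows; every RPC name, flag, and the command payload are copied from the trace, while the assumption is that rpc_cmd in the trace wraps scripts/rpc.py against the default /var/tmp/spdk.sock, and the trailing & mirrors the get_feat_pid capture.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # Hold the next GET FEATURES admin command (opc 10 = 0x0a) for up to 15 s and
    # complete it with sct=0/sc=1 instead of submitting it to the drive:
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Fire the to-be-stuck GET FEATURES (cdw10=7, Number of Queues) in the
    # background; the controller reset two seconds later must complete it
    # manually, which the INVALID OPCODE (00/01) completion below confirms:
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &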
12:20:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:36.772 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 [2024-12-16 12:20:43.883256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:37.032 [2024-12-16 12:20:43.883866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:37.032 [2024-12-16 12:20:43.883971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:37.032 [2024-12-16 12:20:43.884024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:37.032 [2024-12-16 12:20:43.885843] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:37.032 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65788 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65788 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65788 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_uRFNY.txt 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_uRFNY.txt 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65765 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65765 ']' 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65765 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65765 00:08:37.032 killing process with pid 65765 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65765' 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65765 00:08:37.032 12:20:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65765 00:08:38.407 12:20:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:38.407 12:20:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:38.407 ************************************ 00:08:38.407 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:38.407 ************************************ 00:08:38.407 00:08:38.407 real 0m4.584s 
00:08:38.407 user 0m16.113s 00:08:38.407 sys 0m0.531s 00:08:38.407 12:20:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.407 12:20:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.407 12:20:45 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:38.407 12:20:45 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:38.407 12:20:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.407 12:20:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.407 12:20:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.407 ************************************ 00:08:38.407 START TEST nvme_fio 00:08:38.407 ************************************ 00:08:38.407 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:38.407 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:38.407 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:38.407 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:38.407 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:38.407 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:38.407 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:38.407 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:38.407 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:38.407 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:38.407 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:38.407 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:38.407 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:38.407 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:38.407 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:38.407 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:38.407 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:38.407 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:38.667 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:38.667 12:20:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:38.667 12:20:45 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:38.667 12:20:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:38.925 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:38.926 fio-3.35 00:08:38.926 Starting 1 thread 00:08:45.502 00:08:45.502 test: (groupid=0, jobs=1): err= 0: pid=65922: Mon Dec 16 12:20:51 2024 00:08:45.502 read: IOPS=22.7k, BW=88.7MiB/s (93.0MB/s)(177MiB/2001msec) 00:08:45.502 slat (usec): min=3, max=467, avg= 5.04, stdev= 3.48 00:08:45.502 clat (usec): min=680, max=8820, avg=2812.62, stdev=958.34 00:08:45.502 lat (usec): min=691, max=8824, avg=2817.66, stdev=959.72 00:08:45.502 clat percentiles (usec): 00:08:45.502 | 1.00th=[ 1713], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2343], 00:08:45.502 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573], 00:08:45.502 | 70.00th=[ 2704], 80.00th=[ 2933], 90.00th=[ 3851], 95.00th=[ 5276], 00:08:45.502 | 99.00th=[ 6652], 99.50th=[ 7111], 99.90th=[ 7898], 99.95th=[ 8160], 00:08:45.502 | 99.99th=[ 8717] 00:08:45.502 bw ( KiB/s): min=81176, max=98072, per=100.00%, avg=91098.67, stdev=8825.68, samples=3 00:08:45.502 iops : min=20294, max=24518, avg=22774.67, stdev=2206.42, samples=3 00:08:45.502 write: IOPS=22.6k, BW=88.2MiB/s (92.4MB/s)(176MiB/2001msec); 0 zone resets 00:08:45.502 slat (usec): min=3, max=266, avg= 5.16, stdev= 2.79 00:08:45.502 clat (usec): min=639, max=8702, avg=2821.61, stdev=959.30 00:08:45.502 lat (usec): min=651, max=8706, avg=2826.78, stdev=960.57 00:08:45.502 clat percentiles (usec): 00:08:45.502 | 1.00th=[ 1729], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2343], 00:08:45.502 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2606], 00:08:45.502 | 70.00th=[ 2737], 80.00th=[ 2966], 90.00th=[ 3851], 95.00th=[ 5276], 00:08:45.502 | 99.00th=[ 6652], 99.50th=[ 7111], 99.90th=[ 7898], 99.95th=[ 8094], 00:08:45.502 | 99.99th=[ 8586] 00:08:45.502 bw ( KiB/s): min=82424, max=97464, per=100.00%, avg=91261.33, stdev=7858.53, samples=3 00:08:45.502 iops : min=20606, max=24366, avg=22815.33, stdev=1964.63, samples=3 00:08:45.502 lat (usec) : 750=0.01%, 1000=0.02% 00:08:45.502 lat (msec) : 2=2.84%, 4=87.75%, 10=9.39% 00:08:45.502 cpu : usr=98.55%, sys=0.35%, ctx=15, majf=0, minf=608 00:08:45.502 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:45.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:45.502 issued rwts: total=45430,45156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.502 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:45.502 00:08:45.502 Run status group 0 (all jobs): 00:08:45.502 READ: bw=88.7MiB/s (93.0MB/s), 88.7MiB/s-88.7MiB/s (93.0MB/s-93.0MB/s), io=177MiB (186MB), run=2001-2001msec 00:08:45.502 WRITE: bw=88.2MiB/s (92.4MB/s), 88.2MiB/s-88.2MiB/s (92.4MB/s-92.4MB/s), io=176MiB (185MB), run=2001-2001msec 00:08:45.502 ----------------------------------------------------- 00:08:45.502 Suppressions used: 00:08:45.502 count bytes template 00:08:45.502 1 32 /usr/src/fio/parse.c 00:08:45.502 1 8 libtcmalloc_minimal.so 00:08:45.502 ----------------------------------------------------- 00:08:45.502 00:08:45.502 12:20:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:45.502 12:20:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:45.502 12:20:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:45.502 12:20:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:45.502 12:20:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:45.502 12:20:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:45.502 12:20:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:45.502 12:20:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:45.502 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:45.502 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:45.503 12:20:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:08:45.503 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:45.503 fio-3.35
00:08:45.503 Starting 1 thread
00:08:50.801
00:08:50.801 test: (groupid=0, jobs=1): err= 0: pid=65978: Mon Dec 16 12:20:57 2024
00:08:50.801 read: IOPS=20.1k, BW=78.4MiB/s (82.2MB/s)(157MiB/2001msec)
00:08:50.801 slat (nsec): min=3366, max=73190, avg=5322.11, stdev=2646.79
00:08:50.801 clat (usec): min=336, max=10069, avg=3162.78, stdev=1102.35
00:08:50.802 lat (usec): min=341, max=10091, avg=3168.10, stdev=1103.51
00:08:50.802 clat percentiles (usec):
00:08:50.802 | 1.00th=[ 1811], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2376],
00:08:50.802 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2933],
00:08:50.802 | 70.00th=[ 3195], 80.00th=[ 3884], 90.00th=[ 4883], 95.00th=[ 5538],
00:08:50.802 | 99.00th=[ 6849], 99.50th=[ 7308], 99.90th=[ 8356], 99.95th=[ 8717],
00:08:50.802 | 99.99th=[ 9896]
00:08:50.802 bw ( KiB/s): min=69872, max=90360, per=100.00%, avg=81413.33, stdev=10487.55, samples=3
00:08:50.802 iops : min=17468, max=22590, avg=20353.33, stdev=2621.89, samples=3
00:08:50.802 write: IOPS=20.0k, BW=78.2MiB/s (82.0MB/s)(157MiB/2001msec); 0 zone resets
00:08:50.802 slat (nsec): min=3521, max=72535, avg=5439.36, stdev=2660.50
00:08:50.802 clat (usec): min=407, max=10009, avg=3194.99, stdev=1114.55
00:08:50.802 lat (usec): min=412, max=10015, avg=3200.43, stdev=1115.69
00:08:50.802 clat percentiles (usec):
00:08:50.802 | 1.00th=[ 1811], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2409],
00:08:50.802 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2966],
00:08:50.802 | 70.00th=[ 3228], 80.00th=[ 3884], 90.00th=[ 4948], 95.00th=[ 5604],
00:08:50.802 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 8455], 99.95th=[ 8848],
00:08:50.802 | 99.99th=[ 9765]
00:08:50.802 bw ( KiB/s): min=69824, max=90624, per=100.00%, avg=81530.67, stdev=10643.41, samples=3
00:08:50.802 iops : min=17456, max=22656, avg=20382.67, stdev=2660.85, samples=3
00:08:50.802 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.07%
00:08:50.802 lat (msec) : 2=1.60%, 4=79.50%, 10=18.80%, 20=0.01%
00:08:50.802 cpu : usr=98.95%, sys=0.10%, ctx=5, majf=0, minf=607
00:08:50.802 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:50.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:50.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:50.802 issued rwts: total=40174,40073,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:50.802 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:50.802
00:08:50.802 Run status group 0 (all jobs):
00:08:50.802 READ: bw=78.4MiB/s (82.2MB/s), 78.4MiB/s-78.4MiB/s (82.2MB/s-82.2MB/s), io=157MiB (165MB), run=2001-2001msec
00:08:50.802 WRITE: bw=78.2MiB/s (82.0MB/s), 78.2MiB/s-78.2MiB/s (82.0MB/s-82.0MB/s), io=157MiB (164MB), run=2001-2001msec
00:08:51.063 -----------------------------------------------------
00:08:51.063 Suppressions used:
00:08:51.063 count bytes template
00:08:51.063 1 32 /usr/src/fio/parse.c
00:08:51.063 1 8 libtcmalloc_minimal.so
00:08:51.063 -----------------------------------------------------
00:08:51.063
00:08:51.063 12:20:58 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:08:51.063 12:20:58 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:51.063 12:20:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:51.063 12:20:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:51.323 12:20:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:51.323 12:20:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:51.584 12:20:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:51.584 12:20:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:51.584 12:20:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:51.584 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:51.584 fio-3.35
00:08:51.584 Starting 1 thread
00:08:56.875
00:08:56.875 test: (groupid=0, jobs=1): err= 0: pid=66039: Mon Dec 16 12:21:03 2024
00:08:56.875 read: IOPS=15.6k, BW=61.1MiB/s (64.1MB/s)(122MiB/2001msec)
00:08:56.875 slat (usec): min=3, max=161, avg= 6.71, stdev= 4.37
00:08:56.875 clat (usec): min=618, max=10530, avg=4059.57, stdev=1547.96
00:08:56.875 lat (usec): min=631, max=10549, avg=4066.28, stdev=1549.63
00:08:56.875 clat percentiles (usec):
00:08:56.875 | 1.00th=[ 1860], 5.00th=[ 2540], 10.00th=[ 2704], 20.00th=[ 2900],
00:08:56.875 | 30.00th=[ 3032], 40.00th=[ 3195], 50.00th=[ 3359], 60.00th=[ 3752],
00:08:56.875 | 70.00th=[ 4555], 80.00th=[ 5473], 90.00th=[ 6456], 95.00th=[ 7177],
00:08:56.875 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[ 9896], 99.95th=[10159],
00:08:56.875 | 99.99th=[10421]
00:08:56.875 bw ( KiB/s): min=54592, max=71952, per=100.00%, avg=64066.67, stdev=8788.45, samples=3
00:08:56.875 iops : min=13648, max=17988, avg=16016.67, stdev=2197.11, samples=3
00:08:56.875 write: IOPS=15.7k, BW=61.2MiB/s (64.1MB/s)(122MiB/2001msec); 0 zone resets
00:08:56.875 slat (usec): min=3, max=197, avg= 6.83, stdev= 4.04
00:08:56.875 clat (usec): min=799, max=11069, avg=4091.87, stdev=1536.01
00:08:56.875 lat (usec): min=812, max=11109, avg=4098.70, stdev=1537.58
00:08:56.875 clat percentiles (usec):
00:08:56.875 | 1.00th=[ 1893], 5.00th=[ 2573], 10.00th=[ 2737], 20.00th=[ 2933],
00:08:56.875 | 30.00th=[ 3064], 40.00th=[ 3228], 50.00th=[ 3425], 60.00th=[ 3818],
00:08:56.875 | 70.00th=[ 4621], 80.00th=[ 5473], 90.00th=[ 6521], 95.00th=[ 7177],
00:08:56.875 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[ 9896], 99.95th=[10159],
00:08:56.875 | 99.99th=[10552]
00:08:56.875 bw ( KiB/s): min=53928, max=71840, per=100.00%, avg=63805.33, stdev=9097.06, samples=3
00:08:56.875 iops : min=13482, max=17960, avg=15951.33, stdev=2274.27, samples=3
00:08:56.875 lat (usec) : 750=0.01%, 1000=0.01%
00:08:56.875 lat (msec) : 2=1.19%, 4=61.97%, 10=36.77%, 20=0.07%
00:08:56.875 cpu : usr=98.60%, sys=0.05%, ctx=5, majf=0, minf=607
00:08:56.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:56.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:56.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:56.875 issued rwts: total=31299,31328,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:56.875 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:56.875
00:08:56.875 Run status group 0 (all jobs):
00:08:56.875 READ: bw=61.1MiB/s (64.1MB/s), 61.1MiB/s-61.1MiB/s (64.1MB/s-64.1MB/s), io=122MiB (128MB), run=2001-2001msec
00:08:56.875 WRITE: bw=61.2MiB/s (64.1MB/s), 61.2MiB/s-61.2MiB/s (64.1MB/s-64.1MB/s), io=122MiB (128MB), run=2001-2001msec
00:08:56.875 -----------------------------------------------------
00:08:56.875 Suppressions used:
00:08:56.875 count bytes template
00:08:56.875 1 32 /usr/src/fio/parse.c
00:08:56.875 1 8 libtcmalloc_minimal.so
00:08:56.875 -----------------------------------------------------
00:08:56.875
00:08:56.875 12:21:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:08:56.875 12:21:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:56.875 12:21:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:56.875 12:21:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:56.875 12:21:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:56.875 12:21:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:57.135 12:21:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:57.135 12:21:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:57.135 12:21:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:08:57.396 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:57.396 fio-3.35
00:08:57.396 Starting 1 thread
00:09:05.534
00:09:05.534 test: (groupid=0, jobs=1): err= 0: pid=66101: Mon Dec 16 12:21:11 2024
00:09:05.534 read: IOPS=18.4k, BW=72.0MiB/s (75.5MB/s)(144MiB/2001msec)
00:09:05.534 slat (nsec): min=3327, max=71199, avg=5767.33, stdev=3371.90
00:09:05.534 clat (usec): min=191, max=14048, avg=3454.59, stdev=1428.21
00:09:05.534 lat (usec): min=196, max=14095, avg=3460.35, stdev=1430.05
00:09:05.534 clat percentiles (usec):
00:09:05.534 | 1.00th=[ 1532], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2409],
00:09:05.534 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2868], 60.00th=[ 3130],
00:09:05.534 | 70.00th=[ 3720], 80.00th=[ 4686], 90.00th=[ 5800], 95.00th=[ 6456],
00:09:05.534 | 99.00th=[ 7439], 99.50th=[ 7701], 99.90th=[ 8979], 99.95th=[11863],
00:09:05.534 | 99.99th=[13960]
00:09:05.534 bw ( KiB/s): min=70800, max=73704, per=98.15%, avg=72338.67, stdev=1459.74, samples=3
00:09:05.534 iops : min=17700, max=18426, avg=18084.67, stdev=364.93, samples=3
00:09:05.534 write: IOPS=18.4k, BW=72.0MiB/s (75.5MB/s)(144MiB/2001msec); 0 zone resets
00:09:05.534 slat (nsec): min=3370, max=72876, avg=5872.12, stdev=3416.20
00:09:05.534 clat (usec): min=207, max=13948, avg=3464.40, stdev=1425.40
00:09:05.534 lat (usec): min=212, max=13970, avg=3470.27, stdev=1427.21
00:09:05.534 clat percentiles (usec):
00:09:05.534 | 1.00th=[ 1532], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2409],
00:09:05.534 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2900], 60.00th=[ 3163],
00:09:05.534 | 70.00th=[ 3720], 80.00th=[ 4686], 90.00th=[ 5735], 95.00th=[ 6456],
00:09:05.534 | 99.00th=[ 7439], 99.50th=[ 7898], 99.90th=[ 9896], 99.95th=[11994],
00:09:05.534 | 99.99th=[13698]
00:09:05.534 bw ( KiB/s): min=70424, max=73656, per=98.04%, avg=72272.00, stdev=1665.21, samples=3
00:09:05.534 iops : min=17606, max=18414, avg=18068.00, stdev=416.30, samples=3
00:09:05.534 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.15%
00:09:05.534 lat (msec) : 2=2.87%, 4=69.62%, 10=27.25%, 20=0.08%
00:09:05.534 cpu : usr=98.90%, sys=0.05%, ctx=3, majf=0, minf=606
00:09:05.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:09:05.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:05.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:05.534 issued rwts: total=36869,36878,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:05.534 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:05.534
00:09:05.534 Run status group 0 (all jobs):
00:09:05.534 READ: bw=72.0MiB/s (75.5MB/s), 72.0MiB/s-72.0MiB/s (75.5MB/s-75.5MB/s), io=144MiB (151MB), run=2001-2001msec
00:09:05.534 WRITE: bw=72.0MiB/s (75.5MB/s), 72.0MiB/s-72.0MiB/s (75.5MB/s-75.5MB/s), io=144MiB (151MB), run=2001-2001msec
00:09:05.534 -----------------------------------------------------
00:09:05.534 Suppressions used:
00:09:05.534 count bytes template
00:09:05.534 1 32 /usr/src/fio/parse.c
00:09:05.534 1 8 libtcmalloc_minimal.so
00:09:05.534 -----------------------------------------------------
00:09:05.534
00:09:05.534 ************************************
00:09:05.534 END TEST nvme_fio
00:09:05.534 ************************************
00:09:05.534 12:21:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:09:05.534 12:21:11 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:09:05.534
00:09:05.534 real 0m26.462s
00:09:05.534 user 0m20.644s
00:09:05.534 sys 0m8.014s
00:09:05.534 12:21:11 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:05.534 12:21:11 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:09:05.534 ************************************
00:09:05.534 END TEST nvme
00:09:05.534 ************************************
00:09:05.534
00:09:05.534 real 1m35.341s
00:09:05.534 user 3m40.486s
00:09:05.534 sys 0m18.452s
00:09:05.534 12:21:11 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:05.534 12:21:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:05.534 12:21:11 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:09:05.534 12:21:11 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:09:05.534 12:21:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:05.534 12:21:11 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:05.534 12:21:11 -- common/autotest_common.sh@10 -- # set +x
00:09:05.534 ************************************
00:09:05.534 START TEST nvme_scc
00:09:05.534 ************************************
00:09:05.535 12:21:11 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
* Looking for test storage...
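For reference, the three fio passes above (pids 65978, 66039 and 66101) all go through the same fio_nvme wrapper traced at common/autotest_common.sh@1341-1356. Condensed out of the xtrace, the logic is roughly the following sketch; the paths and the libasan.so.8 name are the values from this particular run, not constants of the harness:

    #!/usr/bin/env bash
    # Find the sanitizer runtime the fio plugin links against, then preload
    # it ahead of the SPDK ioengine before launching fio.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    sanitizers=('libasan' 'libclang_rt.asan')

    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
        # field 3 is the resolved library path.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done

    # ASAN has to come first in LD_PRELOAD so its interceptors are bound
    # before any instrumented symbol in the dlopen()ed plugin runs.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

The dots in traddr=0000.00.11.0 are deliberate: fio treats ':' inside --filename as a list separator, so the SPDK ioengine accepts '.'-delimited PCI addresses in place of the usual 0000:00:11.0 form.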
00:09:05.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:05.535 12:21:11 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:05.535 12:21:11 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:05.535 12:21:11 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:05.535 12:21:11 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:05.535 12:21:11 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.535 12:21:11 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.535 --rc genhtml_branch_coverage=1 00:09:05.535 --rc genhtml_function_coverage=1 00:09:05.535 --rc genhtml_legend=1 00:09:05.535 --rc geninfo_all_blocks=1 00:09:05.535 --rc geninfo_unexecuted_blocks=1 00:09:05.535 00:09:05.535 ' 00:09:05.535 12:21:11 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.535 --rc genhtml_branch_coverage=1 00:09:05.535 --rc genhtml_function_coverage=1 00:09:05.535 --rc genhtml_legend=1 00:09:05.535 --rc geninfo_all_blocks=1 00:09:05.535 --rc geninfo_unexecuted_blocks=1 00:09:05.535 00:09:05.535 ' 00:09:05.535 12:21:11 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.535 --rc genhtml_branch_coverage=1 00:09:05.535 --rc genhtml_function_coverage=1 00:09:05.535 --rc genhtml_legend=1 00:09:05.535 --rc geninfo_all_blocks=1 00:09:05.535 --rc geninfo_unexecuted_blocks=1 00:09:05.535 00:09:05.535 ' 00:09:05.535 12:21:11 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.535 --rc genhtml_branch_coverage=1 00:09:05.535 --rc genhtml_function_coverage=1 00:09:05.535 --rc genhtml_legend=1 00:09:05.535 --rc geninfo_all_blocks=1 00:09:05.535 --rc geninfo_unexecuted_blocks=1 00:09:05.535 00:09:05.535 ' 00:09:05.535 12:21:11 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.535 12:21:11 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.535 12:21:11 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.535 12:21:11 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.535 12:21:11 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.535 12:21:11 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:05.535 12:21:11 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
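The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x before exporting the LCOV_OPTS flags: both versions are split on '.', '-' or ':' and compared numerically, field by field. A simplified re-creation follows; it assumes purely numeric fields, whereas the real helper first routes each field through decimal() as seen in the trace:

    # Field-by-field numeric version compare; missing fields count as 0,
    # so "1.15" vs "2" is compared as 1.15.0 vs 2.0.0.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < max; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && { [[ $op == '>' ]]; return; }
            ((a < b)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]] # all fields equal: true only for '==', '<=', '>='
    }

    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "lcov predates 2.x" # prints, matching the trace's return 0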
00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:05.535 12:21:11 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:05.535 12:21:11 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.535 12:21:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:05.535 12:21:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:05.535 12:21:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:05.535 12:21:11 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:05.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:05.535 Waiting for block devices as requested 00:09:05.535 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:05.535 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:05.535 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:05.796 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:11.080 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:11.080 12:21:17 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:11.080 12:21:17 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:11.080 12:21:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:11.080 12:21:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:11.080 12:21:17 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
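From here to the end of the section the log is scan_nvme_ctrls walking each controller: nvme_get declares a global associative array per device (the local -gA 'nvme0=()' above) and then folds every "field : value" line that nvme-cli prints into it, which is what the long run of IFS=:/read/eval lines below is doing. Stripped of the xtrace noise, the pattern is roughly this sketch; the real helper trims reg and val somewhat more carefully:

    # Slurp `nvme id-ctrl` (or id-ns) output into an associative array
    # keyed by field name: nvme0[vid]=0x1b36, nvme0[sn]='12341   ', ...
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue # skip banner and blank lines
            reg=${reg//[[:space:]]/}  # 'vid       ' -> 'vid'
            val=${val# }              # drop the space after ':'
            eval "$ref[$reg]=\"\$val\""
        done < <("$@")
    }

    nvme_get nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
    echo "${nvme0[vid]}" # 0x1b36 for the QEMU controller above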
00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.080 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
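Everything captured this way is a raw register value, so later capability checks in the suite reduce to arithmetic on the arrays: the oacs=0x12a and lpa=0x7 just read, and the oncs=0x15d further down, are all bitmasks. As a hypothetical illustration only (supports_copy is a made-up name, not a helper from this trace), ONCS bit 8 (0x100) is the NVMe "Copy command supported" bit that a simple-copy suite like nvme_scc would care about, and 0x15d has it set:

    # Illustrative bitmask test over a controller array filled by nvme_get.
    supports_copy() {
        local -n ctrl=$1           # nameref to e.g. the nvme0 array
        ((ctrl[oncs] & 0x100))     # ONCS bit 8: Copy command supported
    }

    supports_copy nvme0 && echo "Copy is advertised" # true for oncs=0x15d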
00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.081 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:11.082 12:21:17 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.082 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:11.083 12:21:17 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:11.083 
12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.083 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
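[Note on the trace above: every @16-@23 entry is xtrace output from a single small helper, nvme_get, which runs an nvme-cli subcommand, splits each "mnemonic : value" output line on the first colon, and evals the pair into a global associative array. A minimal sketch of that pattern, assuming an nvme binary on PATH and with trimming simplified relative to the real nvme/functions.sh being traced here:]

# Minimal sketch of the nvme_get pattern behind the @16-@23 entries.
# Simplified/illustrative: the traced run uses /usr/local/src/nvme-cli/nvme.
nvme_get() {
    local ref=$1 subcmd=$2 dev=$3 reg val
    local -gA "$ref=()"               # global map, e.g. ng0n1=()
    while IFS=: read -r reg val; do   # split on the first ':' only
        reg=${reg//[[:space:]]/}      # "lbaf  4 " -> lbaf4, "nsze  " -> nsze
        val=${val# }                  # drop the space after the colon
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"    # e.g. ng0n1[nsze]=0x140000
    done < <(nvme "$subcmd" "$dev")
}
nvme_get ng0n1 id-ns /dev/ng0n1       # then: "${ng0n1[nsze]}" -> 0x140000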
00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:11.084 12:21:17 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.084 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:11.085 12:21:17 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:11.085 12:21:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.085 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:11.086 12:21:17 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:11.086 12:21:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:11.086 12:21:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:11.086 12:21:17 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:11.086 12:21:17 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.086 
12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:11.086 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:11.087 
12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:11.087 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.087 12:21:17 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
00:09:11.088 12:21:17 nvme_scc -- nvme/functions.sh@21-23 -- # [per-register parse loop condensed] nvme1: anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:11.089 12:21:17 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:09:11.089 12:21:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:11.089 12:21:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:11.089 12:21:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:09:11.089 12:21:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:09:11.089 12:21:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:09:11.089 12:21:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:09:11.089 12:21:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
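The dump above is one pass of the harness's nvme_get helper: nvme-cli's id-ctrl output is fed through an IFS=: read loop and each "field : value" pair is eval'd into a global associative array (nvme1[sqes]=0x66, nvme1[nn]=256, and so on). A minimal standalone sketch of that pattern; the whitespace trimming here is an assumption, not copied from functions.sh:

    # Parse "field : value" lines from nvme-cli into a named associative array.
    # Assumes nvme-cli is in PATH; the harness pins /usr/local/src/nvme-cli/nvme.
    nvme_get_sketch() {   # nvme_get_sketch <array-name> <id-ctrl|id-ns> <device>
        local ref=$1 sub=$2 dev=$3 reg val
        local -gA "$ref=()"                  # e.g. nvme1=(), as at functions.sh@20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # field name, padding stripped
            val=${val# }                     # drop the space after ':'
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"       # nvme1[vid]=0x1b36, nvme1[sqes]=0x66, ...
        done < <(nvme "$sub" "$dev")
    }

After nvme_get_sketch nvme1 id-ctrl /dev/nvme1, fields can gate test behavior: ${nvme1[oncs]}=0x15d has ONCS bit 8 (Copy) set, the Simple Copy support this nvme_scc run is presumably probing for.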
00:09:11.089 12:21:17 nvme_scc -- nvme/functions.sh@21-23 -- # [per-register parse loop condensed] ng1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:11.090 12:21:17 nvme_scc -- nvme/functions.sh@21-23 -- # ng1n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:11.090 12:21:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:09:11.090 12:21:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:11.090 12:21:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:11.090 12:21:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:11.090 12:21:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:11.090 12:21:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
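The ng1n1 dump above (and the nvme1n1 dump that follows) reports flbas=0x7 with lbaf7 flagged "(in use)": FLBAS bits 0-3 select the active LBA format, and lbads:12 means 2^12 = 4096-byte logical blocks, with ms:64 bytes of metadata alongside. A hypothetical helper, not part of functions.sh, that derives this from an array filled as above:

    # Hypothetical: decode the in-use LBA format from a namespace array.
    # Usage assumes the array (e.g. ng1n1) was already filled by nvme_get.
    lbaf_in_use() {
        local -n ns=$1                     # bash 4.3+ nameref, e.g. to ng1n1
        local idx=$(( ns[flbas] & 0xf ))   # FLBAS bits 0-3 = active format index
        local desc=${ns[lbaf$idx]}         # 'ms:64 lbads:12 rp:0 (in use)'
        local lbads=${desc#*lbads:}
        lbads=${lbads%% *}
        echo "lbaf$idx: $desc -> $((1 << lbads))-byte logical blocks"
    }
    # lbaf_in_use ng1n1  ->  lbaf7: ms:64 lbads:12 rp:0 (in use) -> 4096-byte logical blocks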
00:09:11.091 12:21:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:11.091 12:21:17 nvme_scc -- nvme/functions.sh@21-23 -- # [per-register parse loop condensed] nvme1n1: identical to ng1n1 above (nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000)
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1n1: lbaf0-lbaf7 same eight formats as ng1n1, lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
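Namespace enumeration (functions.sh@54, traced twice above) leans on an extglob pattern that matches both the generic char device ng1n1 and the block device nvme1n1, and both iterations write _ctrl_ns[1], so the block device, globbed second, is the one that sticks. A re-sketch of the pattern, assuming extglob is already enabled in the harness:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so the pattern becomes
    # @("ng1"|"nvme1n")* and matches /sys/class/nvme/nvme1/{ng1n1,nvme1n1}.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns=${ns##*/}
        echo "nsid ${ns##*n} -> $ns"   # ${ns##*n} strips through the last 'n' -> 1
    done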
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:11.092 12:21:17 nvme_scc -- scripts/common.sh@18-27 -- # [allow/block-list checks condensed; both lists empty] return 0
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
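The @60-@63 lines just above are where a finished controller gets filed: three associative maps plus a numerically ordered list, keyed so later stages can go from controller name to namespace map to PCI address. A standalone sketch of the recorded state with a hypothetical consumer (the real declarations live earlier in functions.sh):

    declare -A ctrls nvmes bdfs nvme1_ns
    declare -a ordered_ctrls
    nvme1_ns[1]=nvme1n1          # set at @58; overwrote the earlier ng1n1 entry
    ctrls[nvme1]=nvme1
    nvmes[nvme1]=nvme1_ns        # *name* of the nsid -> device map
    bdfs[nvme1]=0000:00:10.0     # PCI address (BDF) backing the controller
    ordered_ctrls[1]=nvme1       # slot from ${ctrl_dev/nvme/}

    # Hypothetical consumer: resolve the namespace map through a nameref.
    declare -n ns_map=${nvmes[nvme1]}
    echo "nvme1 @ ${bdfs[nvme1]}: ${ns_map[*]}"   # nvme1 @ 0000:00:10.0: nvme1n1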
'nvme2[fr]="8.0.0 "' 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:11.092 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:11.093 12:21:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.093 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:11.094 12:21:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:11.094 
12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:11.094 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:11.095 
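The xtrace above is the nvme_get helper from nvme/functions.sh at work: each output line of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 is split on ':' (IFS=:; read -r reg val) and the pair is eval'ed into a global associative array, giving nvme2[oacs]=0x12a, nvme2[acl]=3, and so on. A minimal standalone sketch of that pattern, using canned input instead of a real device (parse_id_output and the sample fields are illustrative, not the SPDK helper itself):

#!/usr/bin/env bash
# Re-creation of the field-parsing loop visible in the trace: split
# "reg : val" lines on ':' and store each pair in a global associative
# array named by the caller (nvme2, ng2n1, ... in the log above).
parse_id_output() {
    local ref=$1 reg val
    declare -gA "$ref=()"          # mirrors functions.sh@20's local -gA
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # register names carry only padding
        val=${val# }               # id-* output puts one space after ':'
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"
    done
}

# Try it with canned id-ctrl-style lines (process substitution keeps the
# function in the current shell, so the 'demo' array survives the call):
parse_id_output demo < <(printf '%s\n' \
    'oacs      : 0x12a' \
    'acl       : 3' \
    'wctemp    : 343')
echo "oacs=${demo[oacs]} acl=${demo[acl]} wctemp=${demo[wctemp]}"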
12:21:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
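Between functions.sh@54 and @57 the trace switches from the controller to its namespaces: an extglob pattern discovers each namespace node under the controller's sysfs directory, then the same nvme_get parse runs with id-ns. How that pattern expands for this controller, as a sketch (the shopt line is an assumption — functions.sh must have extglob enabled somewhere for the @(...) pattern to parse):

shopt -s extglob                   # assumption: needed for @(...) globs
ctrl=/sys/class/nvme/nvme2
# "${ctrl##*nvme}" -> "2" and "${ctrl##*/}" -> "nvme2", so the glob is
# /sys/class/nvme/nvme2/@(ng2|nvme2n)* and matches both the character
# nodes (ng2n1, ng2n2, ng2n3) and the block nodes (nvme2n1, ...).
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue       # same guard as functions.sh@55
    echo "would run: nvme id-ns /dev/${ns##*/}"
done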
00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.095 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:11.096 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:11.096 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:11.096 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.096 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.096 12:21:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:11.096 12:21:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:11.096 12:21:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:11.096 12:21:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:11.096 12:21:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:11.096 12:21:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:11.096 12:21:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:11.097 12:21:18 nvme_scc -- 
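Worth noting what the ng2n1 values parsed above amount to: nsze/ncap/nuse are 0x100000 logical blocks, flbas is 0x4 (the low nibble selects lbaf4, which the trace marks "(in use)"), and lbaf4 reports lbads:12, i.e. 2^12 = 4096-byte blocks, so the namespace holds 0x100000 * 4096 bytes = 4 GiB. The same arithmetic as a two-liner:

nsze=0x100000; lbads=12            # values parsed for ng2n1 (lbaf4 in use)
echo $(( nsze * (1 << lbads) ))    # 4294967296 bytes = 4 GiB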
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 
12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.097 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:11.098 12:21:18 
nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@54 -- for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:11.098 12:21:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
    ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
    ng2n3: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
    ng2n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    ng2n3: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:11.100 12:21:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
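Each nvme_get call recorded above runs the same parse loop (nvme/functions.sh, lines 16-23): it invokes nvme-cli's id-ns on the device and folds every "field : value" line into a global associative array named after the device. A minimal sketch of that loop, reconstructed from the trace rather than copied from SPDK's functions.sh, so the $NVME variable and the key-trimming step are assumptions:

    NVME=/usr/local/src/nvme-cli/nvme    # binary path as it appears in the trace

    nvme_get() {
        local ref=$1 reg val
        shift                                # remaining args: id-ns /dev/ng2n3
        local -gA "$ref=()"                  # global per-device map, e.g. ng2n3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # the traced [[ -n ... ]] guard: skip headers/blanks
            reg=${reg//[[:space:]]/}         # assumed trim: "lbaf  4 " -> "lbaf4"
            eval "${ref}[$reg]=\"${val# }\"" # e.g. ng2n3[nsze]="0x100000"
        done < <("$NVME" "$@")
    }

    nvme_get ng2n3 id-ns /dev/ng2n3
    echo "${ng2n3[nsze]}"   # 0x100000

Once the array is filled, the rest of the nvme_scc test can query ${ng2n3[flbas]} and friends without re-running nvme-cli.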
00:09:11.100 12:21:18 nvme_scc -- nvme/functions.sh@54 -- for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:11.100 12:21:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:11.100 12:21:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:11.100 12:21:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:11.100 12:21:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
    nvme2n1: id-ns values identical to ng2n3 above (nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ')
00:09:11.101 12:21:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
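The functions.sh@54 loop header that reappears before each device shows how the harness enumerates namespaces: a bash extglob that matches both the generic char-dev nodes (ng2n1..ng2n3) and the block-dev nodes (nvme2n1..nvme2n3) under the controller's sysfs directory. A short sketch of that enumeration (functions.sh, lines 54-58), reusing the nvme_get sketch above; the declare line is an assumption, since the trace only shows the assignments:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()

    # "ng${ctrl##*nvme}" -> "ng2"    (char-dev namespace prefix)
    # "${ctrl##*/}n"     -> "nvme2n" (block-dev namespace prefix)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                    # ng2n3, nvme2n3, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev         # ${ns##*n} is the namespace index, e.g. 3
    done

Because the glob sorts ng2n* ahead of nvme2n*, each index is written twice and the block-device name wins, which matches the order of the _ctrl_ns assignments in the trace.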
00:09:11.101 12:21:18 nvme_scc -- nvme/functions.sh@54 -- for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:11.101 12:21:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:11.101 12:21:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:11.101 12:21:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:11.101 12:21:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
    nvme2n2: id-ns values identical to ng2n3 above (nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ')
00:09:11.103 12:21:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
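Every namespace in this run reports flbas=0x4 with lbaf4 tagged "(in use)": the low nibble of FLBAS selects the active LBA format, and lbads is log2 of the data block size. A quick standalone decode of the values above (not a snippet from the repo):

    flbas=0x4
    fmt=$(( flbas & 0xf ))                      # -> 4, hence the "(in use)" tag on lbaf4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf4#*lbads:}; lbads=${lbads%% *}  # -> 12
    nsze=0x100000
    echo "LBA format $fmt: $((1 << lbads))-byte blocks, no metadata (ms:0)"
    echo "namespace size: $(( nsze * (1 << lbads) / 1024**3 )) GiB"   # 1,048,576 * 4096 = 4 GiB

So each of nvme2's namespaces is 4 GiB of 4096-byte, metadata-free blocks.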
00:09:11.103 12:21:18 nvme_scc -- nvme/functions.sh@54 -- for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:11.103 12:21:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:11.103 12:21:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:11.103 12:21:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:11.103 12:21:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
    nvme2n3: id-ns values identical to ng2n3 above (nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ')
00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:09:11.104 12:21:18 nvme_scc --
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:11.104 12:21:18 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:11.104 12:21:18 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:11.104 12:21:18 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:11.104 12:21:18 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.104 12:21:18 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:11.104 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:11.105 12:21:18 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:11.105 12:21:18 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 
12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.105 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:11.106 12:21:18 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 
12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:11.106 
12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:11.106 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.107 12:21:18 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:11.107 12:21:18 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:11.107 12:21:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:11.107 12:21:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
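The xtrace running here is SPDK's feature-based controller selection: get_ctrls_with_feature iterates the ctrls associative array and keeps every controller whose ONCS (Optional NVM Command Support) register has bit 8, the Simple Copy Command bit, set; ctrl_has_scc is just that bitmask test, and the remaining controllers are checked the same way immediately below. A minimal standalone sketch of the same check, assuming nvme-cli is installed and using /dev/nvme0 as an example device path:

    #!/usr/bin/env bash
    # Test ONCS bit 8 (Simple Copy Command support) the way ctrl_has_scc does.
    ctrl_supports_scc() {
        local dev=$1 oncs
        # id-ctrl prints a line like "oncs : 0x15d"; grab the hex value.
        oncs=$(nvme id-ctrl "$dev" | awk '/^oncs/ {print $3}')
        (( oncs & 1 << 8 ))
    }

    ctrl_supports_scc /dev/nvme0 && echo "Simple Copy supported"

With oncs=0x15d, as every QEMU controller reports in this run, bit 8 (0x100) is set, so each ctrl_has_scc call succeeds and nvme1 ends up as the first controller handed back to nvme_scc.sh.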
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:11.108 12:21:18 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:11.108 12:21:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:11.108 12:21:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:11.108 12:21:18 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:11.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:11.932 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:12.190 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:12.190 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:12.190 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:12.190 12:21:19 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:12.190 12:21:19 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:12.190 12:21:19 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:12.190 12:21:19 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:12.190 ************************************
00:09:12.190 START TEST nvme_simple_copy
00:09:12.190 ************************************
00:09:12.190 12:21:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:12.450 Initializing NVMe Controllers
00:09:12.450 Attaching to 0000:00:10.0
00:09:12.450 Controller supports SCC. Attached to 0000:00:10.0
00:09:12.450 Namespace ID: 1 size: 6GB
00:09:12.450 Initialization complete.
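The result block that follows reports the actual copy: the test writes LBAs 0 through 63 with random data, issues a Simple Copy with destination LBA 256, then reads the data back and counts matching blocks. Roughly the same sequence can be driven with stock nvme-cli; this is a hedged sketch only - the copy subcommand's flag spellings are assumptions based on recent nvme-cli releases, and /dev/nvme0n1 plus the 4096-byte block size are taken from this run's output:

    dev=/dev/nvme0n1
    # Simple Copy: one source range of 64 blocks starting at LBA 0, destination LBA 256.
    nvme copy "$dev" --slbs=0 --blocks=64 --sdlba=256
    # Read back source and destination (--block-count is zero-based: 63 means 64 blocks).
    nvme read "$dev" --start-block=0   --block-count=63 --data-size=$((64 * 4096)) --data=/tmp/src.bin
    nvme read "$dev" --start-block=256 --block-count=63 --data-size=$((64 * 4096)) --data=/tmp/dst.bin
    cmp /tmp/src.bin /tmp/dst.bin && echo "all 64 LBAs match"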
00:09:12.450
00:09:12.450 Controller QEMU NVMe Ctrl (12340 )
00:09:12.450 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:12.450 Namespace Block Size:4096
00:09:12.450 Writing LBAs 0 to 63 with Random Data
00:09:12.450 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:12.450 LBAs matching Written Data: 64
00:09:12.450
00:09:12.450 real 0m0.288s
00:09:12.450 user 0m0.120s
00:09:12.450 sys 0m0.066s
00:09:12.450 12:21:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:12.450 ************************************
00:09:12.450 END TEST nvme_simple_copy
00:09:12.450 ************************************
00:09:12.450 12:21:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:12.450
00:09:12.450 real 0m7.711s
00:09:12.450 user 0m1.101s
00:09:12.450 sys 0m1.438s
00:09:12.450 12:21:19 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:12.450 12:21:19 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:12.450 ************************************
00:09:12.450 END TEST nvme_scc
00:09:12.450 ************************************
00:09:12.450 12:21:19 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:12.450 12:21:19 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:12.450 12:21:19 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:12.450 12:21:19 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:12.450 12:21:19 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:12.450 12:21:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:12.450 12:21:19 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:12.450 12:21:19 -- common/autotest_common.sh@10 -- # set +x
00:09:12.450 ************************************
00:09:12.450 START TEST nvme_fdp
00:09:12.450 ************************************
00:09:12.450 12:21:19 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:09:12.711 * Looking for test storage...
00:09:12.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:12.711 12:21:19 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:12.712 12:21:19 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:09:12.712 12:21:19 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:12.712 12:21:19 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
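The cmp_versions trace that starts here is the stock shell version comparison: split both version strings on '.', '-' and ':' into arrays (that is what the IFS=.-: / read -ra pairs do), then walk the components left to right, padding the shorter array with zeros. A self-contained sketch of the same idea, simplified to purely numeric components (scripts/common.sh additionally validates each component through its decimal helper):

    # Succeed when version $1 is strictly older than version $2.
    version_lt() {
        local -a v1 v2
        local i n
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

    version_lt 1.15 2 && echo "1.15 < 2"   # same verdict as the lt 1.15 2 call above

Here ver1=(1 15) and ver2=(2); the very first comparison, 1 < 2, settles it, which is why the traced loop returns 0 on its first pass.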
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:09:12.712 12:21:19 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:12.712 12:21:19 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:12.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:12.712 --rc genhtml_branch_coverage=1
00:09:12.712 --rc genhtml_function_coverage=1
00:09:12.712 --rc genhtml_legend=1
00:09:12.712 --rc geninfo_all_blocks=1
00:09:12.712 --rc geninfo_unexecuted_blocks=1
00:09:12.712
00:09:12.712 '
00:09:12.712 12:21:19 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:12.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:12.712 --rc genhtml_branch_coverage=1
00:09:12.712 --rc genhtml_function_coverage=1
00:09:12.712 --rc genhtml_legend=1
00:09:12.712 --rc geninfo_all_blocks=1
00:09:12.712 --rc geninfo_unexecuted_blocks=1
00:09:12.712
00:09:12.712 '
00:09:12.712 12:21:19 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:09:12.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:12.712 --rc genhtml_branch_coverage=1
00:09:12.712 --rc genhtml_function_coverage=1
00:09:12.712 --rc genhtml_legend=1
00:09:12.712 --rc geninfo_all_blocks=1
00:09:12.712 --rc geninfo_unexecuted_blocks=1
00:09:12.712
00:09:12.712 '
00:09:12.712 12:21:19 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:09:12.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:12.712 --rc genhtml_branch_coverage=1
00:09:12.712 --rc genhtml_function_coverage=1
00:09:12.712 --rc genhtml_legend=1
00:09:12.712 --rc geninfo_all_blocks=1
00:09:12.712 --rc geninfo_unexecuted_blocks=1
00:09:12.712
00:09:12.712 '
00:09:12.712 12:21:19 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
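Note how sourcing functions.sh just above derives rootdir from the script's own location (dirname on its own path, then readlink -f three levels up) instead of trusting $PWD, so the helpers resolve the same repository root no matter where the test was launched from. The pattern in isolation, as a sketch (in a sourced script the script's own path comes from BASH_SOURCE; treat the exact variable use as an assumption about functions.sh):

    # Resolve the repo root relative to the sourced script itself; the three
    # ../ levels mirror test/common/nvme/ back up to the repository root.
    script_dir=$(dirname "${BASH_SOURCE[0]}")
    rootdir=$(readlink -f "$script_dir/../../../")
    echo "rootdir=$rootdir"   # in this run: /home/vagrant/spdk_repo/spdk

The associative arrays declared right after (ctrls, nvmes, bdfs, plus the indexed ordered_ctrls) are the state scan_nvme_ctrls fills in: ctrls maps each controller device name to itself, nvmes maps it to the name of its per-namespace array, and bdfs maps it to the PCI address the controller sits behind, exactly as seen for nvme2 and nvme3 earlier in this log.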
00:09:12.712 12:21:19 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:12.712 12:21:19 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:12.712 12:21:19 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:12.712 12:21:19 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:12.712 12:21:19 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:09:12.712 12:21:19 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:09:12.712 12:21:19 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:09:12.712 12:21:19 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:12.712 12:21:19 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:12.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:13.233 Waiting for block devices as requested
00:09:13.233 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:13.233 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:13.233 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:13.492 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:18.795 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:18.795 12:21:25 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:09:18.795 12:21:25 nvme_fdp
00:09:18.795 12:21:25 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:09:18.795 12:21:25 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:18.795 12:21:25 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:09:18.795 12:21:25 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:09:18.795 12:21:25 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:09:18.795 12:21:25 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:18.795 12:21:25 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:09:18.795 12:21:25 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:09:18.795 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:09:18.795 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0: vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0
00:09:18.795 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0: ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:09:18.795 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:09:18.796 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0: wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:09:18.796 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0: sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:18.797 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
00:09:18.798 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0: mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:18.798 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:18.798 12:21:25 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
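The block above is nvme_get populating the nvme0 associative array: it runs nvme-cli's id-ctrl and splits each 'name : value' line on the colon. Reduced to its essentials it looks roughly like this; the real function is more involved (it shifts its arguments, assigns through eval, and special-cases the power-state lines such as ps0):

    # Simplified nvme_get: parse 'nvme id-ctrl' key/value output into a
    # bash associative array (power-state lines need extra handling).
    declare -A nvme0
    while IFS=' :' read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"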
00:09:18.798 12:21:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:18.798 12:21:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:09:18.798 12:21:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:09:18.798 12:21:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:09:18.798 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:09:18.798 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:18.799 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:18.799 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:18.799 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:09:18.800 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
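A quick sanity check on the ng0n1 values just read, as a worked example rather than test output: the low nibble of flbas=0x4 selects LBA format 4, whose lbads of 12 means 4096-byte logical blocks, and nsze=0x140000 such blocks is exactly 5 GiB:

    # Worked example: namespace geometry from the id-ns values above.
    nsze=$(( 0x140000 ))          # namespace size in logical blocks
    fmt=$(( 0x4 & 0xf ))          # flbas low nibble -> LBA format index 4
    lbads=12                      # lbaf4: ms:0 lbads:12 rp:0 (in use)
    block=$(( 1 << lbads ))       # 4096-byte logical blocks
    bytes=$(( nsze * block ))
    echo "$bytes bytes = $(( bytes >> 30 )) GiB"   # 5368709120 bytes = 5 GiB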
00:09:18.800 12:21:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:09:18.800 12:21:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:18.800 12:21:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:18.800 12:21:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:18.800 12:21:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:18.800 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:18.800 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0
00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r
reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.801 12:21:25 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.801 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:18.802 12:21:25 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:18.802 12:21:25 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:18.802 12:21:25 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.802 12:21:25 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:18.802 12:21:25 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:18.802 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
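
The controller walk traced a little earlier (nvme/functions.sh@47-52) visits every /sys/class/nvme/nvme* entry, resolves it to a PCI address (0000:00:10.0 for nvme1), and only probes controllers that pass pci_can_use; the empty left-hand side of the traced '[[ =~ 0000:00:10.0 ]]' test shows that no PCI allow list is set in this run, so the device is accepted. A rough sketch of what that loop amounts to, reconstructed from the trace rather than copied from nvme/functions.sh (in particular, deriving the BDF via a sysfs readlink is an assumption here):

    # Sketch only: pci_can_use and nvme_get are the helpers visible in the trace.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # @48: controller node exists
        pci=$(basename "$(readlink -f "$ctrl/device")")   # @49: assumed sysfs walk to the BDF, e.g. 0000:00:10.0
        pci_can_use "$pci" || continue                    # @50: honour PCI_ALLOWED / PCI_BLOCKED filters
        ctrl_dev=${ctrl##*/}                              # @51: e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # @52: fill the nvme1 associative array
    done
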
00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.803 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
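
Every eval/assignment pair in this stretch is one iteration of the nvme_get helper: the output of '/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1' (functions.sh@16) is read a line at a time with IFS=: (@21), lines with an empty value are skipped (@22), and each remaining field is eval'ed into a global associative array named after the device (@23), which is why the trace ends up with assignments like nvme1[mdts]=7. A minimal sketch of that mechanism, assuming the real nvme/functions.sh only adds extra key and value normalization on top:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # @20: global array named after the device, e.g. nvme1=()
        while IFS=: read -r reg val; do       # @21: split "field : value" output lines at the colon
            reg=${reg//[[:space:]]/}          # assumed trim, so keys become bare names like mdts
            [[ -n $val ]] || continue         # @22: ignore lines that carry no value
            eval "${ref}[\$reg]=\${val# }"    # @23: e.g. nvme1[mdts]=7
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: e.g. id-ctrl /dev/nvme1
    }

Called as 'nvme_get nvme1 id-ctrl /dev/nvme1', this leaves the controller's identify fields queryable as ${nvme1[oncs]} and friends, presumably so the FDP checks further down the suite can interrogate them.
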
00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.804 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.805 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:18.806 12:21:25 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:18.806 12:21:25 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:18.806 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
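
The namespace loop driving this block, 'for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*' (functions.sh@54), is an extglob that matches both the generic character node (ng1n1) and the block node (nvme1n1) of each namespace under the controller's sysfs directory; each gets its own id-ns array, and '_ctrl_ns[${ns##*n}]' (@58) keys the controller's namespace map by namespace number, so the nvme1n1 entry later overwrites the ng1n1 one for index 1, exactly as happened for nvme0 above. A self-contained sketch of that walk, reusing the nvme_get shape sketched earlier:

    shopt -s extglob nullglob
    declare -A _ctrl_ns
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # matches ng1n* and nvme1n*
        ns_dev=${ns##*/}                          # @56: ng1n1, then nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: per-namespace identify data
        _ctrl_ns[${ns_dev##*n}]=$ns_dev           # @58: namespace number -> device node
    done
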
00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.807 12:21:25 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.807 12:21:25 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.807 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:18.808 12:21:25 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:18.808 12:21:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.808 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
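The lbafN strings stored in these passes keep nvme-cli's raw descriptor text, and lbads in them is the base-2 log of the LBA data size: the lbads:9 formats are 512-byte blocks, lbads:12 are 4096-byte. For this QEMU drive the in-use format is lbaf7 (flbas=0x7 above; the "(in use)" marker lands on the ms:64 lbads:12 descriptor a few entries below, as it did for ng1n1), i.e. 4096-byte blocks with 64 bytes of metadata. Recovering the in-use block size from the parsed array, as a sketch with the traced values inlined:

    # Values copied from the nvme1n1 trace.
    declare -A nvme1n1=( [flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)' )
    fmt=$(( nvme1n1[flbas] & 0xf ))     # low nibble of FLBAS selects the format
    desc=${nvme1n1[lbaf$fmt]}
    lbads=${desc##*lbads:}              # -> "12 rp:0 (in use)"
    lbads=${lbads%% *}                  # -> "12"
    echo "in-use block size: $(( 1 << lbads )) bytes"   # -> 4096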
00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:18.809 12:21:25 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:18.809 12:21:25 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:18.809 12:21:25 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.809 12:21:25 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.809 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
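The id-ctrl pass for nvme2 works the same way as the per-namespace dumps, just over controller-level fields. The mdts=7 parsed above deserves a note: MDTS is encoded as a power of two in units of the controller's minimum memory page size (CAP.MPSMIN, which is not part of id-ctrl output), and an mdts of 0 would mean no reported limit. Assuming the common 4 KiB MPSMIN, a quick check:

    # mdts copied from the trace; the 4 KiB page size is an assumption,
    # since CAP.MPSMIN does not appear in id-ctrl output.
    declare -A nvme2=( [mdts]=7 )
    mpsmin=4096
    echo "max transfer: $(( (1 << nvme2[mdts]) * mpsmin / 1024 )) KiB"  # -> 512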
00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:18.810 12:21:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
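The wctemp=343 and cctemp=373 just parsed are kelvin values, per the NVMe spec's temperature convention: roughly the 70 °C warning and 100 °C critical thresholds this QEMU controller advertises. The conversion, as an integer sketch:

    # Thresholds copied from the trace; kelvin -> Celsius (integer approx.).
    declare -A nvme2=( [wctemp]=343 [cctemp]=373 )
    echo "warning:  $(( nvme2[wctemp] - 273 )) C"   # -> 70
    echo "critical: $(( nvme2[cctemp] - 273 )) C"   # -> 100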
00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:18.810 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:18.811 12:21:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.811 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
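Once a controller's fields are parsed, the trace shows the bookkeeping that ties everything together: a nameref (local -n _ctrl_ns=nvme2_ns) aliases the controller's namespace table, namespaces are keyed by index (${ns##*n} strips through the last "n", so ng2n1 becomes key 1), and global maps record the array name and PCI address per controller (ctrls, nvmes, bdfs, plus the numerically indexed ordered_ctrls). A compact sketch of that layout, with values taken from the nvme1 registration earlier in the trace:

    # Bookkeeping pattern from the trace; nvme1 values shown.
    declare -A ctrls nvmes bdfs nvme1_ns
    declare -a ordered_ctrls
    ctrl_dev=nvme1
    ctrls[$ctrl_dev]=nvme1                   # controller -> id-ctrl array
    nvmes[$ctrl_dev]=nvme1_ns                # controller -> namespace table
    bdfs[$ctrl_dev]=0000:00:10.0             # controller -> PCI address
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme1   # numeric slot 1
    declare -n _ctrl_ns=${nvmes[$ctrl_dev]}  # nameref, as in the trace
    ns=nvme1n1
    _ctrl_ns[${ns##*n}]=$ns                  # ${ns##*n} -> "1"
    echo "${nvme1_ns[1]}"                    # -> nvme1n1, via the nameref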
00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:18.812 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # 
00:09:18.813 12:21:25 nvme_fdp -- [xtrace condensed] nvme_get ng2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:18.814 12:21:25 nvme_fdp -- [xtrace condensed] ng2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:18.814 12:21:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:09:18.814 12:21:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:18.814 12:21:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
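The loop header at functions.sh@54 drives this enumeration: an extglob pattern that matches both the generic character nodes (ng2n1..ng2n3) and the block nodes (nvme2n1..) under the controller's sysfs directory. A standalone sketch of what that glob expands to (the shopt line and hard-coded path are assumptions needed to run it outside the suite):

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
declare -A _ctrl_ns=()
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    # "ng${ctrl##*nvme}" expands to ng2, "${ctrl##*/}n" to nvme2n,
    # so the pattern is @(ng2|nvme2n)* relative to $ctrl
    ns_dev=${ns##*/}
    _ctrl_ns[${ns_dev##*n}]=$ns_dev    # keyed by namespace id (1, 2, 3)
done
printf '%s\n' "${_ctrl_ns[@]}"

Since ng2n1 and nvme2n1 share namespace id 1, whichever node the glob yields last wins that slot in _ctrl_ns.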
00:09:18.814 12:21:25 nvme_fdp -- [xtrace condensed] nvme_get ng2n2: all id-ns fields identical to ng2n1 (nsze=ncap=nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, all other fields 0, same eight LBA formats with lbaf4 in use)
00:09:18.816 12:21:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:18.816 12:21:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:18.816 12:21:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
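Each namespace so far reports flbas=0x4 and marks lbaf4 ('ms:0 lbads:12 rp:0') as in use: the low nibble of flbas selects the active LBA format, and lbads is log2 of the data block size, so these namespaces run 4096-byte blocks with no metadata. A small sketch of the decode, reusing the field strings captured above:

declare -A ng2n1=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
fmt=$(( ng2n1[flbas] & 0xf ))              # low nibble -> format index 4
lbaf=${ng2n1[lbaf$fmt]}
lbads=${lbaf#*lbads:}                      # '12 rp:0 (in use)'
lbads=${lbads%% *}                         # '12'
echo "block size: $((1 << lbads)) bytes"   # 2^12 = 4096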
00:09:18.816 12:21:25 nvme_fdp -- [xtrace condensed] nvme_get ng2n3: all id-ns fields identical to ng2n1/ng2n2 (nsze=ncap=nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, dlfeat=1, mssrl=128, mcl=128, msrc=127, all other fields 0, lbaf4 'ms:0 lbads:12 rp:0' in use)
00:09:18.818 12:21:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:09:18.818 12:21:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:18.818 12:21:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:18.818 12:21:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
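With the per-namespace arrays in place, later stages of the run can read fields back without re-invoking nvme-cli. A hedged sketch of that consumption pattern via namerefs (the loop body is illustrative, not a specific functions.sh helper, and assumes the arrays populated above are still in scope):

for ns in ng2n1 ng2n2 ng2n3; do
    declare -n ref=$ns                 # nameref onto the populated global array
    printf '%s: nsze=%s nlbaf=%s active-lbaf=%d\n' \
        "$ns" "${ref[nsze]}" "${ref[nlbaf]}" "$(( ref[flbas] & 0xf ))"
    unset -n ref                       # drop the alias before re-pointing it
done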
00:09:18.818 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:18.819 12:21:25 nvme_fdp -- [xtrace condensed] nvme_get nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
# [[ -n 128 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.819 
12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.819 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
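The xtrace run above is nvme/functions.sh caching every field of `nvme id-ns /dev/nvme2n1` into a global bash associative array, one `IFS=: read` plus `eval` per register. A minimal sketch of that loop, reconstructed from the @16-@23 call sites visible in the trace (a paraphrase under those assumptions, not the verbatim upstream function):

    nvme_get() {                               # e.g. nvme_get nvme2n1 id-ns /dev/nvme2n1
        local ref=$1 reg val
        shift                                  # functions.sh@18
        local -gA "$ref=()"                    # functions.sh@20: global assoc array
        while IFS=: read -r reg val; do        # functions.sh@21: split "reg : val" lines
            [[ -n $val ]] || continue          # functions.sh@22: skip blank fields
            reg=${reg//[[:space:]]/}           # assumption: keys are whitespace-trimmed
            eval "${ref}[\$reg]=\${val# }"     # functions.sh@23: nvme2n1[dpc]="0x1f", ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
    }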
00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:18.820 12:21:25 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:18.820 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:18.821 12:21:25 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.821 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:18.822 12:21:25 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:18.822 12:21:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.822 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.823 12:21:25 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:18.823 12:21:25 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:18.823 12:21:25 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:18.823 12:21:25 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.823 12:21:25 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:18.823 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
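Every controller field dumped here lands in the same kind of array (nvme3[vid]=0x1b36, nvme3[mdts]=7, ...), so later test code can index it directly. An illustrative consumer, using only names that appear in this trace (the checks themselves are hypothetical):

    declare -n ctrl=nvme3                      # nameref into the array built above
    printf 'model=%s fw=%s mdts=%s\n' "${ctrl[mn]}" "${ctrl[fr]}" "${ctrl[mdts]}"
    (( ctrl[oacs] & 0x2 )) && echo 'Format NVM supported'   # OACS bit 1, per NVMe spec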
00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.824 12:21:25 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.824 
12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:18.824 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.825 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:18.826 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:18.827 12:21:25 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:18.827 12:21:25 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:19.089 12:21:25 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:19.089 12:21:25 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:19.089 12:21:25 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:19.089 12:21:25 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:19.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.922 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.922 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.922 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.922 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.922 12:21:27 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:19.922 12:21:27 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.922 12:21:27 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.922 12:21:27 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:19.922 ************************************ 00:09:19.922 START TEST nvme_flexible_data_placement 00:09:19.922 ************************************ 00:09:19.922 12:21:27 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:20.182 Initializing NVMe Controllers 00:09:20.182 Attaching to 0000:00:13.0 00:09:20.182 Controller supports FDP Attached to 0000:00:13.0 00:09:20.182 Namespace ID: 1 Endurance Group ID: 1 00:09:20.182 Initialization complete. 
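The wall of xtrace above is functions.sh caching every identify-controller field of nvme3 into a bash associative array, then get_ctrls_with_feature probing each controller's CTRATT for bit 19, the Flexible Data Placement capability: nvme0/nvme1/nvme2 report 0x8000 and are skipped, while nvme3 reports 0x88010 and is selected. A minimal standalone sketch of the same pattern, assuming nvme-cli's id-ctrl as the field source (the harness parses SPDK's own identify dump instead, and its eval quoting is more defensive than this):

  declare -A nvme3
  # Split each "reg : value" line into a key/value pair, as the xtrace shows.
  while IFS=': ' read -r reg val; do
      [[ -n $reg && -n $val ]] && eval "nvme3[$reg]=\"$val\""
  done < <(nvme id-ctrl /dev/nvme3)
  # CTRATT bit 19 advertises Flexible Data Placement support.
  if (( nvme3[ctratt] & 1 << 19 )); then
      echo "nvme3 supports FDP (ctratt=${nvme3[ctratt]})"
  fi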
00:09:20.182 00:09:20.182 ================================== 00:09:20.182 == FDP tests for Namespace: #01 == 00:09:20.182 ================================== 00:09:20.182 00:09:20.182 Get Feature: FDP: 00:09:20.182 ================= 00:09:20.182 Enabled: Yes 00:09:20.182 FDP configuration Index: 0 00:09:20.182 00:09:20.182 FDP configurations log page 00:09:20.182 =========================== 00:09:20.182 Number of FDP configurations: 1 00:09:20.182 Version: 0 00:09:20.182 Size: 112 00:09:20.182 FDP Configuration Descriptor: 0 00:09:20.182 Descriptor Size: 96 00:09:20.182 Reclaim Group Identifier format: 2 00:09:20.182 FDP Volatile Write Cache: Not Present 00:09:20.182 FDP Configuration: Valid 00:09:20.182 Vendor Specific Size: 0 00:09:20.182 Number of Reclaim Groups: 2 00:09:20.182 Number of Reclaim Unit Handles: 8 00:09:20.182 Max Placement Identifiers: 128 00:09:20.182 Number of Namespaces Supported: 256 00:09:20.182 Reclaim Unit Nominal Size: 6000000 bytes 00:09:20.182 Estimated Reclaim Unit Time Limit: Not Reported 00:09:20.182 RUH Desc #000: RUH Type: Initially Isolated 00:09:20.182 RUH Desc #001: RUH Type: Initially Isolated 00:09:20.182 RUH Desc #002: RUH Type: Initially Isolated 00:09:20.182 RUH Desc #003: RUH Type: Initially Isolated 00:09:20.182 RUH Desc #004: RUH Type: Initially Isolated 00:09:20.182 RUH Desc #005: RUH Type: Initially Isolated 00:09:20.182 RUH Desc #006: RUH Type: Initially Isolated 00:09:20.182 RUH Desc #007: RUH Type: Initially Isolated 00:09:20.182 00:09:20.182 FDP reclaim unit handle usage log page 00:09:20.182 ====================================== 00:09:20.182 Number of Reclaim Unit Handles: 8 00:09:20.182 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:20.182 RUH Usage Desc #001: RUH Attributes: Unused 00:09:20.182 RUH Usage Desc #002: RUH Attributes: Unused 00:09:20.182 RUH Usage Desc #003: RUH Attributes: Unused 00:09:20.182 RUH Usage Desc #004: RUH Attributes: Unused 00:09:20.182 RUH Usage Desc #005: RUH Attributes: Unused 00:09:20.182 RUH Usage Desc #006: RUH Attributes: Unused 00:09:20.182 RUH Usage Desc #007: RUH Attributes: Unused 00:09:20.182 00:09:20.182 FDP statistics log page 00:09:20.182 ======================= 00:09:20.182 Host bytes with metadata written: 1025937408 00:09:20.182 Media bytes with metadata written: 1026056192 00:09:20.182 Media bytes erased: 0 00:09:20.182 00:09:20.182 FDP Reclaim unit handle status 00:09:20.182 ============================== 00:09:20.182 Number of RUHS descriptors: 2 00:09:20.182 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004d97 00:09:20.182 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:20.182 00:09:20.182 FDP write on placement id: 0 success 00:09:20.182 00:09:20.182 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:20.182 00:09:20.182 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:20.183 00:09:20.183 Get Feature: FDP Events for Placement handle: #0 00:09:20.183 ======================== 00:09:20.183 Number of FDP Events: 6 00:09:20.183 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:20.183 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:20.183 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:20.183 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:20.183 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:20.183 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:20.183 00:09:20.183 FDP events log
page 00:09:20.183 =================== 00:09:20.183 Number of FDP events: 1 00:09:20.183 FDP Event #0: 00:09:20.183 Event Type: RU Not Written to Capacity 00:09:20.183 Placement Identifier: Valid 00:09:20.183 NSID: Valid 00:09:20.183 Location: Valid 00:09:20.183 Placement Identifier: 0 00:09:20.183 Event Timestamp: f 00:09:20.183 Namespace Identifier: 1 00:09:20.183 Reclaim Group Identifier: 0 00:09:20.183 Reclaim Unit Handle Identifier: 0 00:09:20.183 00:09:20.183 FDP test passed 00:09:20.183 00:09:20.183 real 0m0.246s 00:09:20.183 user 0m0.085s 00:09:20.183 sys 0m0.059s 00:09:20.183 12:21:27 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.183 ************************************ 00:09:20.183 END TEST nvme_flexible_data_placement 00:09:20.183 ************************************ 00:09:20.183 12:21:27 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:20.444 ************************************ 00:09:20.444 END TEST nvme_fdp 00:09:20.444 ************************************ 00:09:20.444 00:09:20.444 real 0m7.773s 00:09:20.444 user 0m1.114s 00:09:20.444 sys 0m1.366s 00:09:20.444 12:21:27 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.444 12:21:27 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:20.444 12:21:27 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:20.444 12:21:27 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:20.444 12:21:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.444 12:21:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.444 12:21:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.444 ************************************ 00:09:20.444 START TEST nvme_rpc 00:09:20.444 ************************************ 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:20.444 * Looking for test storage... 
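For reference, everything the fdp tool printed above comes from four endurance-group-scoped FDP log pages. A hedged sketch of pulling the raw pages with nvme-cli's generic get-log; the log IDs (0x20 configurations, 0x21 reclaim unit handle usage, 0x22 statistics, 0x23 events) and the use of --lsi to carry the endurance group ID are assumptions to verify against your NVMe spec revision and nvme-cli version:

  for lid in 0x20 0x21 0x22 0x23; do
      # --lsi=1 selects Endurance Group 1, matching the namespace reported above
      # (log IDs and --lsi usage are assumptions, not confirmed by this log).
      nvme get-log /dev/nvme3 --log-id=$lid --log-len=512 --lsi=1
  done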
00:09:20.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.444 12:21:27 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.444 --rc genhtml_branch_coverage=1 00:09:20.444 --rc genhtml_function_coverage=1 00:09:20.444 --rc genhtml_legend=1 00:09:20.444 --rc geninfo_all_blocks=1 00:09:20.444 --rc geninfo_unexecuted_blocks=1 00:09:20.444 00:09:20.444 ' 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.444 --rc genhtml_branch_coverage=1 00:09:20.444 --rc genhtml_function_coverage=1 00:09:20.444 --rc genhtml_legend=1 00:09:20.444 --rc geninfo_all_blocks=1 00:09:20.444 --rc geninfo_unexecuted_blocks=1 00:09:20.444 00:09:20.444 ' 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.444 --rc genhtml_branch_coverage=1 00:09:20.444 --rc genhtml_function_coverage=1 00:09:20.444 --rc genhtml_legend=1 00:09:20.444 --rc geninfo_all_blocks=1 00:09:20.444 --rc geninfo_unexecuted_blocks=1 00:09:20.444 00:09:20.444 ' 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.444 --rc genhtml_branch_coverage=1 00:09:20.444 --rc genhtml_function_coverage=1 00:09:20.444 --rc genhtml_legend=1 00:09:20.444 --rc geninfo_all_blocks=1 00:09:20.444 --rc geninfo_unexecuted_blocks=1 00:09:20.444 00:09:20.444 ' 00:09:20.444 12:21:27 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:20.444 12:21:27 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:20.444 12:21:27 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:20.445 12:21:27 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:20.445 12:21:27 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:20.705 12:21:27 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:20.705 12:21:27 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:20.705 12:21:27 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:20.705 12:21:27 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:20.705 12:21:27 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67487 00:09:20.705 12:21:27 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:20.705 12:21:27 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67487 00:09:20.705 12:21:27 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:20.705 12:21:27 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67487 ']' 00:09:20.705 12:21:27 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.705 12:21:27 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.705 12:21:27 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.705 12:21:27 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.705 12:21:27 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.705 [2024-12-16 12:21:27.658870] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
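With spdk_tgt up and listening on /var/tmp/spdk.sock, the test body that follows is three JSON-RPC calls: attach the first controller as bdev Nvme0, deliberately point bdev_nvme_apply_firmware at a file that does not exist to provoke the -32603 "open file failed." response shown below, then detach. Condensed into a standalone sketch (commands verbatim from this log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  # Negative test: a missing firmware file must fail with JSON-RPC error -32603.
  if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
      echo "got the expected 'open file failed.' error"
  fi
  $rpc bdev_nvme_detach_controller Nvme0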
00:09:20.705 [2024-12-16 12:21:27.658991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67487 ] 00:09:20.965 [2024-12-16 12:21:27.816587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:20.965 [2024-12-16 12:21:27.913914] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.965 [2024-12-16 12:21:27.913986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.534 12:21:28 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.534 12:21:28 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:21.534 12:21:28 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:21.794 Nvme0n1 00:09:21.794 12:21:28 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:21.794 12:21:28 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:22.054 request: 00:09:22.054 { 00:09:22.054 "bdev_name": "Nvme0n1", 00:09:22.054 "filename": "non_existing_file", 00:09:22.054 "method": "bdev_nvme_apply_firmware", 00:09:22.054 "req_id": 1 00:09:22.054 } 00:09:22.054 Got JSON-RPC error response 00:09:22.054 response: 00:09:22.054 { 00:09:22.054 "code": -32603, 00:09:22.054 "message": "open file failed." 00:09:22.054 } 00:09:22.054 12:21:28 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:22.054 12:21:28 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:22.054 12:21:28 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:22.315 12:21:29 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:22.315 12:21:29 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67487 00:09:22.315 12:21:29 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67487 ']' 00:09:22.315 12:21:29 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67487 00:09:22.315 12:21:29 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:22.315 12:21:29 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.315 12:21:29 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67487 00:09:22.315 12:21:29 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.315 12:21:29 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.315 killing process with pid 67487 00:09:22.315 12:21:29 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67487' 00:09:22.315 12:21:29 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67487 00:09:22.315 12:21:29 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67487 00:09:23.696 00:09:23.696 real 0m3.224s 00:09:23.696 user 0m6.155s 00:09:23.696 sys 0m0.509s 00:09:23.696 12:21:30 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.696 ************************************ 00:09:23.696 END TEST nvme_rpc 00:09:23.696 ************************************ 00:09:23.696 12:21:30 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.696 12:21:30 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:23.696 12:21:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:23.696 12:21:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.696 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:09:23.696 ************************************ 00:09:23.696 START TEST nvme_rpc_timeouts 00:09:23.696 ************************************ 00:09:23.696 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:23.696 * Looking for test storage... 00:09:23.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:23.696 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:23.696 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:23.696 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:23.696 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.696 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
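The lt/cmp_versions xtrace around here (continuing below) is the harness gating its lcov flags on `lt 1.15 2`: split both dotted versions into arrays, walk the fields numerically, and treat missing fields as 0. A simplified standalone sketch of that comparison, assuming purely numeric fields (the real scripts/common.sh splits on IFS=.-: so dash-separated versions also parse):

  lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earliest differing field decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # equal is not less-than
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2.x; branch/function coverage flags enabled"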
00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.954 12:21:30 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:23.954 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.954 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:23.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.954 --rc genhtml_branch_coverage=1 00:09:23.954 --rc genhtml_function_coverage=1 00:09:23.954 --rc genhtml_legend=1 00:09:23.954 --rc geninfo_all_blocks=1 00:09:23.954 --rc geninfo_unexecuted_blocks=1 00:09:23.954 00:09:23.954 ' 00:09:23.954 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:23.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.954 --rc genhtml_branch_coverage=1 00:09:23.954 --rc genhtml_function_coverage=1 00:09:23.954 --rc genhtml_legend=1 00:09:23.954 --rc geninfo_all_blocks=1 00:09:23.954 --rc geninfo_unexecuted_blocks=1 00:09:23.954 00:09:23.954 ' 00:09:23.954 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:23.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.955 --rc genhtml_branch_coverage=1 00:09:23.955 --rc genhtml_function_coverage=1 00:09:23.955 --rc genhtml_legend=1 00:09:23.955 --rc geninfo_all_blocks=1 00:09:23.955 --rc geninfo_unexecuted_blocks=1 00:09:23.955 00:09:23.955 ' 00:09:23.955 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.955 --rc genhtml_branch_coverage=1 00:09:23.955 --rc genhtml_function_coverage=1 00:09:23.955 --rc genhtml_legend=1 00:09:23.955 --rc geninfo_all_blocks=1 00:09:23.955 --rc geninfo_unexecuted_blocks=1 00:09:23.955 00:09:23.955 ' 00:09:23.955 12:21:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:23.955 12:21:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67552 00:09:23.955 12:21:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67552 00:09:23.955 12:21:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67584 00:09:23.955 12:21:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:23.955 12:21:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67584 00:09:23.955 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67584 ']' 00:09:23.955 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.955 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.955 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:23.955 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.955 12:21:30 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:23.955 12:21:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:23.955 [2024-12-16 12:21:30.889093] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:23.955 [2024-12-16 12:21:30.889217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67584 ] 00:09:23.955 [2024-12-16 12:21:31.043031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:24.212 [2024-12-16 12:21:31.121182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.212 [2024-12-16 12:21:31.121219] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.778 Checking default timeout settings: 00:09:24.778 12:21:31 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.778 12:21:31 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:24.778 12:21:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:24.778 12:21:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:25.036 Making settings changes with rpc: 00:09:25.036 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:25.036 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:25.294 Check default vs. modified settings: 00:09:25.294 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:25.294 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67552 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67552 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:25.552 Setting action_on_timeout is changed as expected. 
00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67552 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67552 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:25.552 Setting timeout_us is changed as expected. 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67552 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67552 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:25.552 Setting timeout_admin_us is changed as expected. 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
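The check that just ran three times is worth pulling out: snapshot the target configuration with save_config before and after bdev_nvme_set_options, extract each timeout field from both snapshots, and require that every one changed. A condensed sketch, with the commands, flags, and the grep/awk/sed extractor taken verbatim from this log (note the substring grep for timeout_us also matches timeout_admin_us; the comparison still flips because both values change):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
  done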
00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67552 /tmp/settings_modified_67552 00:09:25.552 12:21:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67584 00:09:25.552 12:21:32 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67584 ']' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67584 00:09:25.552 12:21:32 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:25.552 12:21:32 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67584 00:09:25.552 killing process with pid 67584 00:09:25.552 12:21:32 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.552 12:21:32 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67584' 00:09:25.552 12:21:32 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67584 00:09:25.552 12:21:32 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67584 00:09:26.928 RPC TIMEOUT SETTING TEST PASSED. 00:09:26.928 12:21:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:26.928 ************************************ 00:09:26.928 END TEST nvme_rpc_timeouts 00:09:26.928 ************************************ 00:09:26.928 00:09:26.928 real 0m3.117s 00:09:26.928 user 0m6.098s 00:09:26.928 sys 0m0.463s 00:09:26.928 12:21:33 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.928 12:21:33 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:26.928 12:21:33 -- spdk/autotest.sh@239 -- # uname -s 00:09:26.928 12:21:33 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:26.928 12:21:33 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:26.928 12:21:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.928 12:21:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.928 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:09:26.928 ************************************ 00:09:26.928 START TEST sw_hotplug 00:09:26.928 ************************************ 00:09:26.928 12:21:33 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:26.928 * Looking for test storage... 
00:09:26.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:26.928 12:21:33 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.928 12:21:33 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.928 12:21:33 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.928 12:21:33 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.928 12:21:33 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.928 12:21:34 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:26.928 12:21:34 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.928 12:21:34 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.928 --rc genhtml_branch_coverage=1 00:09:26.928 --rc genhtml_function_coverage=1 00:09:26.928 --rc genhtml_legend=1 00:09:26.928 --rc geninfo_all_blocks=1 00:09:26.928 --rc geninfo_unexecuted_blocks=1 00:09:26.928 00:09:26.928 ' 00:09:26.928 12:21:34 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.928 --rc genhtml_branch_coverage=1 00:09:26.928 --rc genhtml_function_coverage=1 00:09:26.928 --rc genhtml_legend=1 00:09:26.928 --rc geninfo_all_blocks=1 00:09:26.928 --rc geninfo_unexecuted_blocks=1 00:09:26.928 00:09:26.928 ' 00:09:26.928 12:21:34 
sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:26.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.928 --rc genhtml_branch_coverage=1 00:09:26.928 --rc genhtml_function_coverage=1 00:09:26.928 --rc genhtml_legend=1 00:09:26.928 --rc geninfo_all_blocks=1 00:09:26.928 --rc geninfo_unexecuted_blocks=1 00:09:26.928 00:09:26.928 ' 00:09:26.928 12:21:34 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.928 --rc genhtml_branch_coverage=1 00:09:26.928 --rc genhtml_function_coverage=1 00:09:26.928 --rc genhtml_legend=1 00:09:26.928 --rc geninfo_all_blocks=1 00:09:26.928 --rc geninfo_unexecuted_blocks=1 00:09:26.928 00:09:26.928 ' 00:09:26.928 12:21:34 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:27.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.450 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:27.450 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:27.450 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:27.450 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:27.450 12:21:34 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:27.450 12:21:34 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:27.450 12:21:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:27.450 12:21:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:27.450 12:21:34 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:27.451 
12:21:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:27.451 12:21:34 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:27.451 12:21:34 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:27.451 12:21:34 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:27.451 12:21:34 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:27.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.972 Waiting for block devices as requested 00:09:27.972 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.972 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.232 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.232 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.552 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:33.552 12:21:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:33.552 12:21:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:33.552 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:33.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.813 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:34.073 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:34.334 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.334 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:34.334 12:21:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68442 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:34.334 12:21:41 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:34.334 12:21:41 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:34.334 12:21:41 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:34.334 12:21:41 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:34.334 12:21:41 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:34.334 12:21:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:34.596 Initializing NVMe Controllers 00:09:34.596 Attaching to 0000:00:10.0 00:09:34.596 Attaching to 0000:00:11.0 00:09:34.596 Attached to 0000:00:11.0 00:09:34.596 Attached to 0000:00:10.0 00:09:34.596 Initialization complete. Starting I/O... 00:09:34.596 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:34.596 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:34.596 00:09:35.539 QEMU NVMe Ctrl (12341 ): 2656 I/Os completed (+2656) 00:09:35.539 QEMU NVMe Ctrl (12340 ): 2656 I/Os completed (+2656) 00:09:35.539 00:09:36.482 QEMU NVMe Ctrl (12341 ): 5876 I/Os completed (+3220) 00:09:36.482 QEMU NVMe Ctrl (12340 ): 5879 I/Os completed (+3223) 00:09:36.482 00:09:37.870 QEMU NVMe Ctrl (12341 ): 9105 I/Os completed (+3229) 00:09:37.870 QEMU NVMe Ctrl (12340 ): 9110 I/Os completed (+3231) 00:09:37.870 00:09:38.442 QEMU NVMe Ctrl (12341 ): 12385 I/Os completed (+3280) 00:09:38.442 QEMU NVMe Ctrl (12340 ): 12390 I/Os completed (+3280) 00:09:38.442 00:09:39.819 QEMU NVMe Ctrl (12341 ): 16014 I/Os completed (+3629) 00:09:39.819 QEMU NVMe Ctrl (12340 ): 16016 I/Os completed (+3626) 00:09:39.819 00:09:40.385 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:40.385 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:40.385 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:40.385 [2024-12-16 12:21:47.346116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:40.385 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:40.385 [2024-12-16 12:21:47.347279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.347375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.347404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.347442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:40.385 [2024-12-16 12:21:47.348948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.349046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.349110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.349135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:40.385 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:40.385 [2024-12-16 12:21:47.370013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
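The bare "echo 1" / "echo uio_pci_generic" lines in this phase drive the Linux PCI hotplug interface; the xtrace records only the echo arguments, not the sysfs files they are redirected into, so the paths in this sketch are an assumption based on the standard sysfs layout rather than a copy of sw_hotplug.sh:

    # One surprise-remove / re-attach cycle for a single controller.
    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"             # surprise-remove the function
    sleep 6                                                  # hotplug_wait from the trace
    echo 1 > /sys/bus/pci/rescan                             # re-discover removed functions
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                 # bind it to the chosen driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"    # clear the override again

The "in failed state" / "aborting outstanding command" errors above are the expected reaction of the hotplug example to the device vanishing mid-I/O, not a test failure.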
00:09:40.385 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:40.385 [2024-12-16 12:21:47.371185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.371296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.371329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.371387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:40.385 [2024-12-16 12:21:47.372942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.373031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.373059] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 [2024-12-16 12:21:47.373115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:40.385 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:40.385 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:40.385 EAL: Scan for (pci) bus failed. 00:09:40.385 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:40.385 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:40.385 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:40.385 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:40.643 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:40.643 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:40.643 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:40.643 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:40.643 Attaching to 0000:00:10.0 00:09:40.643 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:40.643 Attached to 0000:00:10.0 00:09:40.643 QEMU NVMe Ctrl (12340 ): 4 I/Os completed (+4) 00:09:40.643 00:09:40.643 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:40.643 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:40.643 12:21:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:40.643 Attaching to 0000:00:11.0 00:09:40.643 Attached to 0000:00:11.0 00:09:41.576 QEMU NVMe Ctrl (12340 ): 3799 I/Os completed (+3795) 00:09:41.576 QEMU NVMe Ctrl (12341 ): 3482 I/Os completed (+3482) 00:09:41.576 00:09:42.510 QEMU NVMe Ctrl (12340 ): 7559 I/Os completed (+3760) 00:09:42.510 QEMU NVMe Ctrl (12341 ): 7245 I/Os completed (+3763) 00:09:42.510 00:09:43.450 QEMU NVMe Ctrl (12340 ): 11108 I/Os completed (+3549) 00:09:43.450 QEMU NVMe Ctrl (12341 ): 10788 I/Os completed (+3543) 00:09:43.450 00:09:44.829 QEMU NVMe Ctrl (12340 ): 14457 I/Os completed (+3349) 00:09:44.829 QEMU NVMe Ctrl (12341 ): 14149 I/Os completed (+3361) 00:09:44.829 00:09:45.762 QEMU NVMe Ctrl (12340 ): 18142 I/Os completed (+3685) 00:09:45.762 QEMU NVMe Ctrl (12341 ): 17854 I/Os completed (+3705) 00:09:45.762 00:09:46.752 QEMU NVMe Ctrl (12340 ): 21721 I/Os completed (+3579) 00:09:46.752 QEMU NVMe Ctrl (12341 ): 21457 I/Os completed (+3603) 00:09:46.752 00:09:47.697 QEMU NVMe Ctrl (12340 ): 24745 I/Os completed (+3024) 
00:09:47.697 QEMU NVMe Ctrl (12341 ): 24489 I/Os completed (+3032) 00:09:47.697 00:09:48.639 QEMU NVMe Ctrl (12340 ): 27505 I/Os completed (+2760) 00:09:48.639 QEMU NVMe Ctrl (12341 ): 27254 I/Os completed (+2765) 00:09:48.639 00:09:49.585 QEMU NVMe Ctrl (12340 ): 30301 I/Os completed (+2796) 00:09:49.585 QEMU NVMe Ctrl (12341 ): 30062 I/Os completed (+2808) 00:09:49.585 00:09:50.528 QEMU NVMe Ctrl (12340 ): 33113 I/Os completed (+2812) 00:09:50.528 QEMU NVMe Ctrl (12341 ): 32874 I/Os completed (+2812) 00:09:50.528 00:09:51.469 QEMU NVMe Ctrl (12340 ): 35797 I/Os completed (+2684) 00:09:51.469 QEMU NVMe Ctrl (12341 ): 35575 I/Os completed (+2701) 00:09:51.469 00:09:52.858 QEMU NVMe Ctrl (12340 ): 38426 I/Os completed (+2629) 00:09:52.858 QEMU NVMe Ctrl (12341 ): 38225 I/Os completed (+2650) 00:09:52.858 00:09:52.858 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:52.858 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:52.858 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:52.858 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:52.858 [2024-12-16 12:21:59.628857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:52.858 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:52.858 [2024-12-16 12:21:59.630570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.630687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.630724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.630837] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:52.858 [2024-12-16 12:21:59.633109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.633208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.633228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.633245] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:52.858 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:52.858 [2024-12-16 12:21:59.650747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:52.858 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:52.858 [2024-12-16 12:21:59.651967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.652025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.652048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.652063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:52.858 [2024-12-16 12:21:59.654109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.654197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.654217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 [2024-12-16 12:21:59.654234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.858 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:52.859 Attaching to 0000:00:10.0 00:09:52.859 Attached to 0000:00:10.0 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:52.859 12:21:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:52.859 Attaching to 0000:00:11.0 00:09:53.121 Attached to 0000:00:11.0 00:09:53.693 QEMU NVMe Ctrl (12340 ): 1756 I/Os completed (+1756) 00:09:53.693 QEMU NVMe Ctrl (12341 ): 1539 I/Os completed (+1539) 00:09:53.693 00:09:54.637 QEMU NVMe Ctrl (12340 ): 4500 I/Os completed (+2744) 00:09:54.637 QEMU NVMe Ctrl (12341 ): 4283 I/Os completed (+2744) 00:09:54.637 00:09:55.580 QEMU NVMe Ctrl (12340 ): 7184 I/Os completed (+2684) 00:09:55.580 QEMU NVMe Ctrl (12341 ): 6972 I/Os completed (+2689) 00:09:55.580 00:09:56.524 QEMU NVMe Ctrl (12340 ): 9887 I/Os completed (+2703) 00:09:56.524 QEMU NVMe Ctrl (12341 ): 9672 I/Os completed (+2700) 00:09:56.524 00:09:57.467 QEMU NVMe Ctrl (12340 ): 12567 I/Os completed (+2680) 00:09:57.467 QEMU NVMe Ctrl (12341 ): 12352 I/Os completed (+2680) 00:09:57.467 00:09:58.839 QEMU NVMe Ctrl (12340 ): 16299 I/Os completed (+3732) 00:09:58.839 QEMU NVMe Ctrl (12341 ): 16075 I/Os completed (+3723) 00:09:58.839 00:09:59.773 QEMU NVMe Ctrl (12340 ): 20075 I/Os completed (+3776) 00:09:59.773 QEMU NVMe Ctrl (12341 ): 19859 I/Os completed (+3784) 00:09:59.773 00:10:00.709 QEMU NVMe Ctrl (12340 ): 23786 I/Os completed (+3711) 00:10:00.709 QEMU NVMe Ctrl (12341 ): 23612 I/Os completed (+3753) 00:10:00.709 00:10:01.652 
QEMU NVMe Ctrl (12340 ): 26618 I/Os completed (+2832) 00:10:01.652 QEMU NVMe Ctrl (12341 ): 26449 I/Os completed (+2837) 00:10:01.652 00:10:02.596 QEMU NVMe Ctrl (12340 ): 29426 I/Os completed (+2808) 00:10:02.596 QEMU NVMe Ctrl (12341 ): 29257 I/Os completed (+2808) 00:10:02.596 00:10:03.604 QEMU NVMe Ctrl (12340 ): 32130 I/Os completed (+2704) 00:10:03.604 QEMU NVMe Ctrl (12341 ): 31968 I/Os completed (+2711) 00:10:03.604 00:10:04.546 QEMU NVMe Ctrl (12340 ): 34890 I/Os completed (+2760) 00:10:04.546 QEMU NVMe Ctrl (12341 ): 34728 I/Os completed (+2760) 00:10:04.546 00:10:05.117 12:22:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:05.117 12:22:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:05.117 12:22:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:05.117 12:22:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:05.117 [2024-12-16 12:22:11.968120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:05.117 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:05.117 [2024-12-16 12:22:11.969754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.117 [2024-12-16 12:22:11.969960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.117 [2024-12-16 12:22:11.970085] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.117 [2024-12-16 12:22:11.970125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.117 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:05.117 [2024-12-16 12:22:11.972713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.117 [2024-12-16 12:22:11.973331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.117 [2024-12-16 12:22:11.973398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.117 [2024-12-16 12:22:11.973430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.117 12:22:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:05.117 12:22:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:05.117 [2024-12-16 12:22:11.993742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:05.117 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:05.117 [2024-12-16 12:22:11.995111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.117 [2024-12-16 12:22:11.995286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.118 [2024-12-16 12:22:11.995329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.118 [2024-12-16 12:22:11.995392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.118 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:05.118 [2024-12-16 12:22:11.997392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.118 [2024-12-16 12:22:11.997531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.118 [2024-12-16 12:22:11.997573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.118 [2024-12-16 12:22:11.997633] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.118 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:05.118 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:05.118 EAL: Scan for (pci) bus failed. 00:10:05.118 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:05.118 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:05.118 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:05.118 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:05.118 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:05.118 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:05.118 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:05.118 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:05.118 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:05.118 Attaching to 0000:00:10.0 00:10:05.118 Attached to 0000:00:10.0 00:10:05.378 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:05.378 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:05.378 12:22:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:05.378 Attaching to 0000:00:11.0 00:10:05.378 Attached to 0000:00:11.0 00:10:05.378 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:05.378 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:05.378 [2024-12-16 12:22:12.298108] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:17.605 12:22:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:17.605 12:22:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:17.605 12:22:24 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.95 00:10:17.605 12:22:24 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.95 00:10:17.605 12:22:24 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:17.605 12:22:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.95 00:10:17.605 12:22:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.95 2 00:10:17.605 remove_attach_helper took 42.95s to complete (handling 2 nvme drive(s)) 12:22:24 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:24.199 12:22:30 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68442 00:10:24.199 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68442) - No such process 00:10:24.199 12:22:30 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68442 00:10:24.199 12:22:30 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:24.199 12:22:30 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:24.199 12:22:30 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:24.199 12:22:30 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68992 00:10:24.199 12:22:30 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:24.199 12:22:30 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:24.199 12:22:30 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68992 00:10:24.199 12:22:30 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68992 ']' 00:10:24.199 12:22:30 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.199 12:22:30 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.199 12:22:30 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.199 12:22:30 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.199 12:22:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.199 [2024-12-16 12:22:30.378011] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
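Once this second spdk_tgt is listening, the script switches from the standalone hotplug example to bdev-level hotplug: it enables the NVMe hotplug monitor over RPC (the rpc_cmd bdev_nvme_set_hotplug -e call traced below), so the target itself attaches and detaches bdevs as devices come and go. Done by hand against the default RPC socket, that step is just:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e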
00:10:24.199 [2024-12-16 12:22:30.378130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68992 ] 00:10:24.199 [2024-12-16 12:22:30.535519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.199 [2024-12-16 12:22:30.630997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.199 12:22:31 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.199 12:22:31 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:24.199 12:22:31 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:24.199 12:22:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.199 12:22:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.199 12:22:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.200 12:22:31 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:24.200 12:22:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:24.200 12:22:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:24.200 12:22:31 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:24.200 12:22:31 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:24.200 12:22:31 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:24.200 12:22:31 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:24.200 12:22:31 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:24.200 12:22:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:24.200 12:22:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:24.200 12:22:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:24.200 12:22:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:24.200 12:22:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:30.790 12:22:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.790 12:22:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:30.790 12:22:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:30.790 [2024-12-16 12:22:37.318188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:30.790 [2024-12-16 12:22:37.319582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.790 [2024-12-16 12:22:37.319621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:30.790 [2024-12-16 12:22:37.319638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:30.790 [2024-12-16 12:22:37.319663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.790 [2024-12-16 12:22:37.319675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:30.790 [2024-12-16 12:22:37.319689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:30.790 [2024-12-16 12:22:37.319700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.790 [2024-12-16 12:22:37.319712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:30.790 [2024-12-16 12:22:37.319723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:30.790 [2024-12-16 12:22:37.319740] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.790 [2024-12-16 12:22:37.319751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:30.790 [2024-12-16 12:22:37.319764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:30.790 12:22:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.790 12:22:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:30.790 12:22:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:30.790 12:22:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:31.049 [2024-12-16 12:22:37.918174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
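The bdev_bdfs / sleep 0.5 cycle traced here is a poll loop: after each surprise removal the script asks the target which PCI addresses still back an NVMe bdev and waits until the removed ones drop out. A sketch of the same loop, using the repo paths from the log (requires jq and a running target):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # List the unique PCI addresses behind all currently attached NVMe bdevs.
    bdev_bdfs() {
        "$rpc" bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    # Poll until no bdev reports a PCI address any more.
    while [ "$(bdev_bdfs | wc -l)" -gt 0 ]; do
        printf 'Still waiting for %s to be gone\n' $(bdev_bdfs)
        sleep 0.5
    done

The "Still waiting for ... to be gone" lines in the log are this loop firing once per half-second while the hotplug monitor tears the bdevs down.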
00:10:31.049 [2024-12-16 12:22:37.919439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.049 [2024-12-16 12:22:37.919473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.049 [2024-12-16 12:22:37.919490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.049 [2024-12-16 12:22:37.919510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.049 [2024-12-16 12:22:37.919524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.049 [2024-12-16 12:22:37.919536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.049 [2024-12-16 12:22:37.919549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.049 [2024-12-16 12:22:37.919560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.049 [2024-12-16 12:22:37.919573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.049 [2024-12-16 12:22:37.919585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.049 [2024-12-16 12:22:37.919598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.049 [2024-12-16 12:22:37.919609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.308 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:31.308 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:31.308 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:31.308 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:31.308 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:31.308 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:31.308 12:22:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.308 12:22:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:31.308 12:22:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.308 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:31.308 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:31.566 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:31.566 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:31.566 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:31.566 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:31.566 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:31.566 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:31.566 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:31.566 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:31.566 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:31.566 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:31.566 12:22:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:43.766 12:22:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.766 12:22:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:43.766 12:22:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:43.766 12:22:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.766 12:22:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:43.766 [2024-12-16 12:22:50.718384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:43.766 [2024-12-16 12:22:50.719640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.766 [2024-12-16 12:22:50.719739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:43.766 [2024-12-16 12:22:50.719802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:43.766 [2024-12-16 12:22:50.719856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.766 [2024-12-16 12:22:50.719875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:43.766 [2024-12-16 12:22:50.719925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:43.766 [2024-12-16 12:22:50.719973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.766 [2024-12-16 12:22:50.719990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:43.766 [2024-12-16 12:22:50.720013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:43.766 [2024-12-16 12:22:50.720083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.766 [2024-12-16 12:22:50.720104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:43.766 [2024-12-16 12:22:50.720129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:43.766 12:22:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:43.766 12:22:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:44.025 [2024-12-16 12:22:51.118379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:44.025 [2024-12-16 12:22:51.119612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.025 [2024-12-16 12:22:51.119642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.025 [2024-12-16 12:22:51.119654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.025 [2024-12-16 12:22:51.119666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.025 [2024-12-16 12:22:51.119675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.025 [2024-12-16 12:22:51.119682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.025 [2024-12-16 12:22:51.119690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.025 [2024-12-16 12:22:51.119696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.025 [2024-12-16 12:22:51.119704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.025 [2024-12-16 12:22:51.119711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.025 [2024-12-16 12:22:51.119719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.025 [2024-12-16 12:22:51.119725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.283 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:44.283 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:44.283 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:44.283 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:44.283 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:44.283 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.283 12:22:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.283 12:22:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.283 12:22:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.283 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:44.283 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:44.283 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:44.283 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:44.283 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:44.540 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:44.540 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:44.540 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:44.540 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:44.540 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:44.540 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:44.540 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:44.540 12:22:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:56.741 12:23:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.741 12:23:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:56.741 12:23:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:56.741 12:23:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.741 12:23:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:56.741 [2024-12-16 12:23:03.618604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:56.741 [2024-12-16 12:23:03.619854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.741 [2024-12-16 12:23:03.619952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.741 [2024-12-16 12:23:03.620373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:56.741 [2024-12-16 12:23:03.620528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.741 [2024-12-16 12:23:03.620623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.741 [2024-12-16 12:23:03.620703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:56.741 [2024-12-16 12:23:03.620889] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.741 [2024-12-16 12:23:03.621026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.741 12:23:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.741 [2024-12-16 12:23:03.621285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:56.741 [2024-12-16 12:23:03.621570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.741 [2024-12-16 12:23:03.621727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.741 [2024-12-16 12:23:03.621919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:56.741 12:23:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:57.313 [2024-12-16 12:23:04.118642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:57.313 [2024-12-16 12:23:04.120527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.313 [2024-12-16 12:23:04.120707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.313 [2024-12-16 12:23:04.121218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.313 [2024-12-16 12:23:04.121779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.313 [2024-12-16 12:23:04.121816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.313 [2024-12-16 12:23:04.121830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.313 [2024-12-16 12:23:04.121844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.313 [2024-12-16 12:23:04.121853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.313 [2024-12-16 12:23:04.121868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.313 [2024-12-16 12:23:04.121878] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.313 [2024-12-16 12:23:04.121889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.313 [2024-12-16 12:23:04.121898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.313 12:23:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.313 12:23:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.313 12:23:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.313 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:57.572 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:57.572 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.572 12:23:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.29 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.29 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.29 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.29 2 00:11:09.776 remove_attach_helper took 45.29s to complete (handling 2 nvme drive(s)) 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:09.776 12:23:16 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:09.776 12:23:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:09.776 12:23:16 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.334 12:23:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.334 12:23:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.334 12:23:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:16.334 12:23:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:16.334 [2024-12-16 12:23:22.638281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:16.334 [2024-12-16 12:23:22.639195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.335 [2024-12-16 12:23:22.639223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.335 [2024-12-16 12:23:22.639234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.335 [2024-12-16 12:23:22.639252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.335 [2024-12-16 12:23:22.639259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.335 [2024-12-16 12:23:22.639270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.335 [2024-12-16 12:23:22.639277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.335 [2024-12-16 12:23:22.639285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.335 [2024-12-16 12:23:22.639291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.335 [2024-12-16 12:23:22.639300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.335 [2024-12-16 12:23:22.639306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.335 [2024-12-16 12:23:22.639316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.335 [2024-12-16 12:23:23.038280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:16.335 [2024-12-16 12:23:23.039143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.335 [2024-12-16 12:23:23.039186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.335 [2024-12-16 12:23:23.039197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.335 [2024-12-16 12:23:23.039210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.335 [2024-12-16 12:23:23.039219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.335 [2024-12-16 12:23:23.039226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.335 [2024-12-16 12:23:23.039235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.335 [2024-12-16 12:23:23.039242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.335 [2024-12-16 12:23:23.039250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.335 [2024-12-16 12:23:23.039257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.335 [2024-12-16 12:23:23.039264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.335 [2024-12-16 12:23:23.039271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.335 12:23:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.335 12:23:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.335 12:23:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.335 12:23:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.551 12:23:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.551 12:23:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.551 12:23:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:28.551 [2024-12-16 12:23:35.438505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:28.551 [2024-12-16 12:23:35.439646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.551 [2024-12-16 12:23:35.439761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.551 [2024-12-16 12:23:35.439825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.551 [2024-12-16 12:23:35.439887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.551 [2024-12-16 12:23:35.439905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.551 [2024-12-16 12:23:35.439952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.551 [2024-12-16 12:23:35.439999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.551 [2024-12-16 12:23:35.440018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.551 [2024-12-16 12:23:35.440061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.551 [2024-12-16 12:23:35.440088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.551 [2024-12-16 12:23:35.440124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.551 [2024-12-16 12:23:35.440151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:28.551 12:23:35 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.551 12:23:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.551 12:23:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.551 12:23:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:28.551 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:29.117 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:29.117 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.117 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.117 12:23:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.117 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.117 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.117 12:23:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.117 12:23:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.117 12:23:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.117 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:29.117 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:29.117 [2024-12-16 12:23:36.138510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:29.117 [2024-12-16 12:23:36.139489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.117 [2024-12-16 12:23:36.139591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.117 [2024-12-16 12:23:36.139677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.117 [2024-12-16 12:23:36.139975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.117 [2024-12-16 12:23:36.140011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.117 [2024-12-16 12:23:36.140082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.117 [2024-12-16 12:23:36.140113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.117 [2024-12-16 12:23:36.140183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.117 [2024-12-16 12:23:36.140262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.117 [2024-12-16 12:23:36.140363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.117 [2024-12-16 12:23:36.140402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.117 [2024-12-16 12:23:36.140428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.683 12:23:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.683 12:23:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.683 12:23:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.683 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
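The (( N > 0 )) / sleep 0.5 cycle traced at sw_hotplug.sh@50-51 keeps re-querying bdev_get_bdevs until the detached controllers drop out of the bdev list; printf re-applies its format string per argument, which is why the log shows one "Still waiting for ... to be gone" line per remaining BDF. A plausible sketch of that polling loop, assuming the bdev_bdfs helper above; the loop structure is inferred from the trace, not quoted from the script:

    # Poll until no NVMe bdev reports a PCI address anymore,
    # i.e. the hot-remove has fully completed.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

Once the array comes back empty, the test re-attaches the devices (the echo 1 / uio_pci_generic / BDF sequence at sw_hotplug.sh@56-62; xtrace does not record the redirection targets, so those sysfs paths are not reconstructed here).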
00:11:29.941 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:29.941 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.941 12:23:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.145 12:23:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.145 12:23:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.145 12:23:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.145 12:23:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.145 12:23:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.145 12:23:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:42.145 12:23:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:42.145 [2024-12-16 12:23:48.938744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:42.145 [2024-12-16 12:23:48.939667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.145 [2024-12-16 12:23:48.939695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.145 [2024-12-16 12:23:48.939705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.145 [2024-12-16 12:23:48.939723] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.145 [2024-12-16 12:23:48.939730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.145 [2024-12-16 12:23:48.939740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.145 [2024-12-16 12:23:48.939747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.146 [2024-12-16 12:23:48.939757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.146 [2024-12-16 12:23:48.939764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.146 [2024-12-16 12:23:48.939772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.146 [2024-12-16 12:23:48.939778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.146 [2024-12-16 12:23:48.939785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.404 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:42.404 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.404 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.404 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.404 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.404 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.404 12:23:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.404 12:23:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.404 [2024-12-16 12:23:49.438749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:42.404 [2024-12-16 12:23:49.439612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.404 [2024-12-16 12:23:49.439642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.404 [2024-12-16 12:23:49.439654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.404 [2024-12-16 12:23:49.439667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.404 [2024-12-16 12:23:49.439676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.404 [2024-12-16 12:23:49.439683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.404 [2024-12-16 12:23:49.439691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.404 [2024-12-16 12:23:49.439698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.404 [2024-12-16 12:23:49.439705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.404 [2024-12-16 12:23:49.439712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.404 [2024-12-16 12:23:49.439721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.404 [2024-12-16 12:23:49.439728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.404 12:23:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.404 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:42.404 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:42.969 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:42.969 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.969 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.969 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.969 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.969 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.969 12:23:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.969 12:23:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.969 12:23:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.969 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:42.969 12:23:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:42.969 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.969 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.969 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:43.264 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:43.264 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.264 12:23:50 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.264 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.264 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:43.264 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:43.264 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.264 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:55.479 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:55.479 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:55.479 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:55.479 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.479 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.479 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.479 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:55.479 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.70 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.70 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:55.479 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.70 00:11:55.479 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.70 2 00:11:55.479 remove_attach_helper took 45.70s to complete (handling 2 nvme drive(s)) 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:55.479 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68992 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68992 ']' 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68992 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68992 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68992' 00:11:55.479 killing process with pid 68992 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68992 00:11:55.479 12:24:02 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68992 00:11:56.416 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:56.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:57.251 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.251 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.251 0000:00:13.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:57.251 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.513 00:11:57.513 real 2m30.565s 00:11:57.513 user 1m52.123s 00:11:57.513 sys 0m16.938s 00:11:57.513 ************************************ 00:11:57.513 END TEST sw_hotplug 00:11:57.513 ************************************ 00:11:57.513 12:24:04 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.513 12:24:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.513 12:24:04 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:57.513 12:24:04 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:57.513 12:24:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:57.513 12:24:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.513 12:24:04 -- common/autotest_common.sh@10 -- # set +x 00:11:57.513 ************************************ 00:11:57.513 START TEST nvme_xnvme 00:11:57.513 ************************************ 00:11:57.513 12:24:04 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:57.513 * Looking for test storage... 00:11:57.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.513 12:24:04 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:57.513 12:24:04 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:57.513 12:24:04 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:57.513 12:24:04 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:57.513 12:24:04 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.777 12:24:04 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:57.777 12:24:04 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:57.777 12:24:04 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.777 12:24:04 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:57.777 12:24:04 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.777 12:24:04 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.777 12:24:04 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.777 12:24:04 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:57.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.777 --rc genhtml_branch_coverage=1 00:11:57.777 --rc genhtml_function_coverage=1 00:11:57.777 --rc genhtml_legend=1 00:11:57.777 --rc geninfo_all_blocks=1 00:11:57.777 --rc geninfo_unexecuted_blocks=1 00:11:57.777 00:11:57.777 ' 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:57.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.777 --rc genhtml_branch_coverage=1 00:11:57.777 --rc genhtml_function_coverage=1 00:11:57.777 --rc genhtml_legend=1 00:11:57.777 --rc geninfo_all_blocks=1 00:11:57.777 --rc geninfo_unexecuted_blocks=1 00:11:57.777 00:11:57.777 ' 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:57.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.777 --rc genhtml_branch_coverage=1 00:11:57.777 --rc genhtml_function_coverage=1 00:11:57.777 --rc genhtml_legend=1 00:11:57.777 --rc geninfo_all_blocks=1 00:11:57.777 --rc geninfo_unexecuted_blocks=1 00:11:57.777 00:11:57.777 ' 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:57.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.777 --rc genhtml_branch_coverage=1 00:11:57.777 --rc genhtml_function_coverage=1 00:11:57.777 --rc genhtml_legend=1 00:11:57.777 --rc geninfo_all_blocks=1 00:11:57.777 --rc geninfo_unexecuted_blocks=1 00:11:57.777 00:11:57.777 ' 00:11:57.777 12:24:04 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:57.777 12:24:04 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:57.777 12:24:04 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:57.777 12:24:04 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:57.777 12:24:04 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:57.778 12:24:04 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:57.778 12:24:04 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:57.778 12:24:04 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:57.778 #define SPDK_CONFIG_H 00:11:57.778 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:57.778 #define SPDK_CONFIG_APPS 1 00:11:57.778 #define SPDK_CONFIG_ARCH native 00:11:57.778 #define SPDK_CONFIG_ASAN 1 00:11:57.778 #undef SPDK_CONFIG_AVAHI 00:11:57.778 #undef SPDK_CONFIG_CET 00:11:57.778 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:57.778 #define SPDK_CONFIG_COVERAGE 1 00:11:57.778 #define SPDK_CONFIG_CROSS_PREFIX 00:11:57.778 #undef SPDK_CONFIG_CRYPTO 00:11:57.778 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:57.778 #undef SPDK_CONFIG_CUSTOMOCF 00:11:57.778 #undef SPDK_CONFIG_DAOS 00:11:57.778 #define SPDK_CONFIG_DAOS_DIR 00:11:57.778 #define SPDK_CONFIG_DEBUG 1 00:11:57.778 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:57.778 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:57.778 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:57.778 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:57.778 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:57.778 #undef SPDK_CONFIG_DPDK_UADK 00:11:57.778 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:57.778 #define SPDK_CONFIG_EXAMPLES 1 00:11:57.778 #undef SPDK_CONFIG_FC 00:11:57.778 #define SPDK_CONFIG_FC_PATH 00:11:57.778 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:57.778 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:57.778 #define SPDK_CONFIG_FSDEV 1 00:11:57.778 #undef SPDK_CONFIG_FUSE 00:11:57.778 #undef SPDK_CONFIG_FUZZER 00:11:57.778 #define SPDK_CONFIG_FUZZER_LIB 00:11:57.778 #undef SPDK_CONFIG_GOLANG 00:11:57.778 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:57.778 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:57.778 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:57.778 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:57.778 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:57.778 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:57.778 #undef SPDK_CONFIG_HAVE_LZ4 00:11:57.778 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:57.778 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:57.778 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:57.778 #define SPDK_CONFIG_IDXD 1 00:11:57.778 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:57.778 #undef SPDK_CONFIG_IPSEC_MB 00:11:57.778 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:57.778 #define SPDK_CONFIG_ISAL 1 00:11:57.778 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:57.778 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:57.778 #define SPDK_CONFIG_LIBDIR 00:11:57.778 #undef SPDK_CONFIG_LTO 00:11:57.778 #define SPDK_CONFIG_MAX_LCORES 128 00:11:57.778 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:57.778 #define SPDK_CONFIG_NVME_CUSE 1 00:11:57.778 #undef SPDK_CONFIG_OCF 00:11:57.778 #define SPDK_CONFIG_OCF_PATH 00:11:57.778 #define SPDK_CONFIG_OPENSSL_PATH 00:11:57.778 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:57.778 #define SPDK_CONFIG_PGO_DIR 00:11:57.778 #undef SPDK_CONFIG_PGO_USE 00:11:57.778 #define SPDK_CONFIG_PREFIX /usr/local 00:11:57.778 #undef SPDK_CONFIG_RAID5F 00:11:57.778 #undef SPDK_CONFIG_RBD 00:11:57.778 #define SPDK_CONFIG_RDMA 1 00:11:57.778 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:57.778 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:57.778 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:57.778 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:57.778 #define SPDK_CONFIG_SHARED 1 00:11:57.778 #undef SPDK_CONFIG_SMA 00:11:57.778 #define SPDK_CONFIG_TESTS 1 00:11:57.778 #undef SPDK_CONFIG_TSAN 00:11:57.778 #define SPDK_CONFIG_UBLK 1 00:11:57.778 #define SPDK_CONFIG_UBSAN 1 00:11:57.778 #undef SPDK_CONFIG_UNIT_TESTS 00:11:57.778 #undef SPDK_CONFIG_URING 00:11:57.778 #define SPDK_CONFIG_URING_PATH 00:11:57.778 #undef SPDK_CONFIG_URING_ZNS 00:11:57.778 #undef SPDK_CONFIG_USDT 00:11:57.778 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:57.778 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:57.778 #undef SPDK_CONFIG_VFIO_USER 00:11:57.778 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:57.778 #define SPDK_CONFIG_VHOST 1 00:11:57.778 #define SPDK_CONFIG_VIRTIO 1 00:11:57.778 #undef SPDK_CONFIG_VTUNE 00:11:57.778 #define SPDK_CONFIG_VTUNE_DIR 00:11:57.778 #define SPDK_CONFIG_WERROR 1 00:11:57.778 #define SPDK_CONFIG_WPDK_DIR 00:11:57.778 #define SPDK_CONFIG_XNVME 1 00:11:57.778 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:57.778 12:24:04 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:57.778 12:24:04 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:57.778 12:24:04 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.778 12:24:04 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.778 12:24:04 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.778 12:24:04 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.778 12:24:04 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.778 12:24:04 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.778 12:24:04 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.778 12:24:04 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:57.778 12:24:04 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.778 12:24:04 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:57.778 12:24:04 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@68 -- # uname -s 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:57.779 
12:24:04 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:57.779 12:24:04 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:57.779 12:24:04 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:57.779 12:24:04 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:57.780 12:24:04 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70361 ]] 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70361 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.2TFqtc 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.2TFqtc/tests/xnvme /tmp/spdk.2TFqtc 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:57.780 12:24:04 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971722240 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596131328 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971722240 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596131328 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:57.780 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.781 12:24:04 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265249792 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98871058432 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=831721472 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:57.781 * Looking for test storage... 
00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13971722240 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.781 12:24:04 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:57.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.781 --rc genhtml_branch_coverage=1 00:11:57.781 --rc genhtml_function_coverage=1 00:11:57.781 --rc genhtml_legend=1 00:11:57.781 --rc geninfo_all_blocks=1 00:11:57.781 --rc geninfo_unexecuted_blocks=1 00:11:57.781 00:11:57.781 ' 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:57.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.781 --rc genhtml_branch_coverage=1 00:11:57.781 --rc genhtml_function_coverage=1 00:11:57.781 --rc genhtml_legend=1 00:11:57.781 --rc geninfo_all_blocks=1 
00:11:57.781 --rc geninfo_unexecuted_blocks=1 00:11:57.781 00:11:57.781 ' 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:57.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.781 --rc genhtml_branch_coverage=1 00:11:57.781 --rc genhtml_function_coverage=1 00:11:57.781 --rc genhtml_legend=1 00:11:57.781 --rc geninfo_all_blocks=1 00:11:57.781 --rc geninfo_unexecuted_blocks=1 00:11:57.781 00:11:57.781 ' 00:11:57.781 12:24:04 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:57.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.781 --rc genhtml_branch_coverage=1 00:11:57.781 --rc genhtml_function_coverage=1 00:11:57.781 --rc genhtml_legend=1 00:11:57.781 --rc geninfo_all_blocks=1 00:11:57.782 --rc geninfo_unexecuted_blocks=1 00:11:57.782 00:11:57.782 ' 00:11:57.782 12:24:04 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:57.782 12:24:04 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.782 12:24:04 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.782 12:24:04 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.782 12:24:04 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.782 12:24:04 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.782 12:24:04 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.782 12:24:04 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.782 12:24:04 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:57.782 12:24:04 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.782 12:24:04 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:57.782 12:24:04 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:58.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:58.304 Waiting for block devices as requested 00:11:58.304 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.304 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.565 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.565 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:03.855 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:03.855 12:24:10 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:04.130 12:24:11 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:04.130 12:24:11 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:04.130 12:24:11 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:04.130 12:24:11 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:04.130 12:24:11 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:04.130 12:24:11 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:04.130 12:24:11 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:04.391 No valid GPT data, bailing 00:12:04.391 12:24:11 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:04.391 12:24:11 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:04.391 12:24:11 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:04.391 12:24:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:04.391 12:24:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.391 12:24:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.391 12:24:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.391 ************************************ 00:12:04.391 START TEST xnvme_rpc 00:12:04.391 ************************************ 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70756 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70756 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70756 ']' 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.391 12:24:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.391 [2024-12-16 12:24:11.396878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:04.391 [2024-12-16 12:24:11.397252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70756 ] 00:12:04.652 [2024-12-16 12:24:11.555266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.652 [2024-12-16 12:24:11.675510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.597 xnvme_bdev 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70756 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70756 ']' 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70756 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70756 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.597 killing process with pid 70756 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70756' 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70756 00:12:05.597 12:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70756 00:12:07.516 00:12:07.516 real 0m2.889s 00:12:07.516 user 0m2.886s 00:12:07.516 sys 0m0.472s 00:12:07.516 12:24:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.516 12:24:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.516 ************************************ 00:12:07.516 END TEST xnvme_rpc 00:12:07.516 ************************************ 00:12:07.516 12:24:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:07.516 12:24:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:07.516 12:24:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.516 12:24:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:07.516 ************************************ 00:12:07.516 START TEST xnvme_bdevperf 00:12:07.516 ************************************ 00:12:07.516 12:24:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:07.516 12:24:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:07.516 12:24:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:07.516 12:24:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:07.516 12:24:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:07.516 12:24:14 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:07.516 12:24:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:07.516 12:24:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:07.516 { 00:12:07.516 "subsystems": [ 00:12:07.516 { 00:12:07.516 "subsystem": "bdev", 00:12:07.516 "config": [ 00:12:07.516 { 00:12:07.516 "params": { 00:12:07.516 "io_mechanism": "libaio", 00:12:07.516 "conserve_cpu": false, 00:12:07.516 "filename": "/dev/nvme0n1", 00:12:07.516 "name": "xnvme_bdev" 00:12:07.516 }, 00:12:07.516 "method": "bdev_xnvme_create" 00:12:07.516 }, 00:12:07.516 { 00:12:07.516 "method": "bdev_wait_for_examine" 00:12:07.516 } 00:12:07.516 ] 00:12:07.516 } 00:12:07.516 ] 00:12:07.516 } 00:12:07.516 [2024-12-16 12:24:14.346605] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:07.516 [2024-12-16 12:24:14.346748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70824 ] 00:12:07.516 [2024-12-16 12:24:14.509414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.777 [2024-12-16 12:24:14.627511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.039 Running I/O for 5 seconds... 00:12:09.926 23982.00 IOPS, 93.68 MiB/s [2024-12-16T12:24:17.978Z] 24638.00 IOPS, 96.24 MiB/s [2024-12-16T12:24:19.364Z] 24743.67 IOPS, 96.65 MiB/s [2024-12-16T12:24:19.936Z] 24805.50 IOPS, 96.90 MiB/s [2024-12-16T12:24:20.197Z] 24675.60 IOPS, 96.39 MiB/s 00:12:13.091 Latency(us) 00:12:13.091 [2024-12-16T12:24:20.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.091 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:13.091 xnvme_bdev : 5.01 24661.13 96.33 0.00 0.00 2589.53 519.88 6856.07 00:12:13.091 [2024-12-16T12:24:20.197Z] =================================================================================================================== 00:12:13.091 [2024-12-16T12:24:20.197Z] Total : 24661.13 96.33 0.00 0.00 2589.53 519.88 6856.07 00:12:13.663 12:24:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:13.663 12:24:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:13.663 12:24:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:13.663 12:24:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:13.663 12:24:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:13.925 { 00:12:13.925 "subsystems": [ 00:12:13.925 { 00:12:13.925 "subsystem": "bdev", 00:12:13.925 "config": [ 00:12:13.925 { 00:12:13.925 "params": { 00:12:13.925 "io_mechanism": "libaio", 00:12:13.925 "conserve_cpu": false, 00:12:13.925 "filename": "/dev/nvme0n1", 00:12:13.925 "name": "xnvme_bdev" 00:12:13.925 }, 00:12:13.925 "method": "bdev_xnvme_create" 00:12:13.925 }, 00:12:13.925 { 00:12:13.925 "method": "bdev_wait_for_examine" 00:12:13.925 } 00:12:13.925 ] 00:12:13.925 } 00:12:13.925 ] 00:12:13.925 } 00:12:13.925 [2024-12-16 12:24:20.827235] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:13.925 [2024-12-16 12:24:20.828030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70905 ] 00:12:13.925 [2024-12-16 12:24:21.003247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.187 [2024-12-16 12:24:21.123347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.451 Running I/O for 5 seconds... 00:12:16.374 33100.00 IOPS, 129.30 MiB/s [2024-12-16T12:24:24.864Z] 33462.50 IOPS, 130.71 MiB/s [2024-12-16T12:24:25.805Z] 32490.00 IOPS, 126.91 MiB/s [2024-12-16T12:24:26.747Z] 32237.50 IOPS, 125.93 MiB/s 00:12:19.641 Latency(us) 00:12:19.641 [2024-12-16T12:24:26.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.641 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:19.641 xnvme_bdev : 5.00 32581.89 127.27 0.00 0.00 1959.91 450.56 7108.14 00:12:19.641 [2024-12-16T12:24:26.747Z] =================================================================================================================== 00:12:19.641 [2024-12-16T12:24:26.747Z] Total : 32581.89 127.27 0.00 0.00 1959.91 450.56 7108.14 00:12:20.213 00:12:20.213 real 0m12.983s 00:12:20.213 user 0m4.997s 00:12:20.213 sys 0m6.465s 00:12:20.213 12:24:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.213 12:24:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:20.213 ************************************ 00:12:20.213 END TEST xnvme_bdevperf 00:12:20.213 ************************************ 00:12:20.213 12:24:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:20.213 12:24:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:20.213 12:24:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.213 12:24:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.473 ************************************ 00:12:20.473 START TEST xnvme_fio_plugin 00:12:20.473 ************************************ 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:20.473 12:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:20.473 { 00:12:20.473 "subsystems": [ 00:12:20.473 { 00:12:20.473 "subsystem": "bdev", 00:12:20.473 "config": [ 00:12:20.473 { 00:12:20.473 "params": { 00:12:20.473 "io_mechanism": "libaio", 00:12:20.473 "conserve_cpu": false, 00:12:20.473 "filename": "/dev/nvme0n1", 00:12:20.473 "name": "xnvme_bdev" 00:12:20.473 }, 00:12:20.473 "method": "bdev_xnvme_create" 00:12:20.473 }, 00:12:20.473 { 00:12:20.473 "method": "bdev_wait_for_examine" 00:12:20.473 } 00:12:20.473 ] 00:12:20.473 } 00:12:20.473 ] 00:12:20.473 } 00:12:20.473 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:20.473 fio-3.35 00:12:20.473 Starting 1 thread 00:12:27.060 00:12:27.060 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71024: Mon Dec 16 12:24:33 2024 00:12:27.060 read: IOPS=30.5k, BW=119MiB/s (125MB/s)(596MiB/5001msec) 00:12:27.060 slat (usec): min=4, max=1867, avg=24.16, stdev=104.47 00:12:27.060 clat (usec): min=111, max=5409, avg=1442.06, stdev=555.29 00:12:27.060 lat (usec): min=199, max=5414, avg=1466.22, stdev=544.72 00:12:27.060 clat percentiles (usec): 00:12:27.060 | 1.00th=[ 297], 5.00th=[ 603], 10.00th=[ 766], 20.00th=[ 988], 00:12:27.060 | 30.00th=[ 1156], 40.00th=[ 1303], 50.00th=[ 1434], 60.00th=[ 1549], 00:12:27.060 | 70.00th=[ 1680], 80.00th=[ 1827], 90.00th=[ 2089], 95.00th=[ 2376], 00:12:27.060 | 99.00th=[ 3163], 99.50th=[ 3490], 99.90th=[ 4047], 99.95th=[ 4293], 00:12:27.060 | 99.99th=[ 4686] 00:12:27.060 bw ( KiB/s): min=112616, max=129328, per=100.00%, avg=122253.33, 
stdev=5090.48, samples=9 00:12:27.060 iops : min=28154, max=32332, avg=30563.33, stdev=1272.62, samples=9 00:12:27.060 lat (usec) : 250=0.56%, 500=2.46%, 750=6.52%, 1000=11.08% 00:12:27.060 lat (msec) : 2=66.76%, 4=12.51%, 10=0.12% 00:12:27.060 cpu : usr=38.76%, sys=52.44%, ctx=12, majf=0, minf=764 00:12:27.060 IO depths : 1=0.4%, 2=1.1%, 4=2.9%, 8=8.0%, 16=22.9%, 32=62.5%, >=64=2.1% 00:12:27.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.060 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:27.060 issued rwts: total=152680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.060 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.060 00:12:27.060 Run status group 0 (all jobs): 00:12:27.060 READ: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=596MiB (625MB), run=5001-5001msec 00:12:27.321 ----------------------------------------------------- 00:12:27.321 Suppressions used: 00:12:27.321 count bytes template 00:12:27.321 1 11 /usr/src/fio/parse.c 00:12:27.321 1 8 libtcmalloc_minimal.so 00:12:27.321 1 904 libcrypto.so 00:12:27.321 ----------------------------------------------------- 00:12:27.321 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:27.321 12:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:27.321 { 00:12:27.321 "subsystems": [ 00:12:27.321 { 00:12:27.321 "subsystem": "bdev", 00:12:27.321 "config": [ 00:12:27.321 { 00:12:27.321 "params": { 00:12:27.321 "io_mechanism": "libaio", 00:12:27.321 "conserve_cpu": false, 00:12:27.321 "filename": "/dev/nvme0n1", 00:12:27.321 "name": "xnvme_bdev" 00:12:27.321 }, 00:12:27.321 "method": "bdev_xnvme_create" 00:12:27.321 }, 00:12:27.321 { 00:12:27.321 "method": "bdev_wait_for_examine" 00:12:27.321 } 00:12:27.321 ] 00:12:27.321 } 00:12:27.321 ] 00:12:27.321 } 00:12:27.582 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:27.582 fio-3.35 00:12:27.582 Starting 1 thread 00:12:34.178 00:12:34.178 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71117: Mon Dec 16 12:24:40 2024 00:12:34.178 write: IOPS=33.4k, BW=131MiB/s (137MB/s)(653MiB/5001msec); 0 zone resets 00:12:34.178 slat (usec): min=4, max=2097, avg=22.62, stdev=88.26 00:12:34.178 clat (usec): min=107, max=5642, avg=1293.92, stdev=557.12 00:12:34.178 lat (usec): min=190, max=5649, avg=1316.54, stdev=550.82 00:12:34.178 clat percentiles (usec): 00:12:34.178 | 1.00th=[ 277], 5.00th=[ 490], 10.00th=[ 644], 20.00th=[ 832], 00:12:34.178 | 30.00th=[ 988], 40.00th=[ 1123], 50.00th=[ 1254], 60.00th=[ 1369], 00:12:34.178 | 70.00th=[ 1532], 80.00th=[ 1696], 90.00th=[ 1942], 95.00th=[ 2245], 00:12:34.178 | 99.00th=[ 3064], 99.50th=[ 3392], 99.90th=[ 4228], 99.95th=[ 4555], 00:12:34.178 | 99.99th=[ 5080] 00:12:34.178 bw ( KiB/s): min=116384, max=143184, per=100.00%, avg=134150.22, stdev=8492.89, samples=9 00:12:34.178 iops : min=29096, max=35796, avg=33537.56, stdev=2123.22, samples=9 00:12:34.178 lat (usec) : 250=0.69%, 500=4.58%, 750=9.99%, 1000=15.59% 00:12:34.178 lat (msec) : 2=60.35%, 4=8.65%, 10=0.15% 00:12:34.178 cpu : usr=37.64%, sys=51.74%, ctx=11, majf=0, minf=765 00:12:34.178 IO depths : 1=0.4%, 2=1.0%, 4=2.8%, 8=8.0%, 16=22.6%, 32=63.2%, >=64=2.1% 00:12:34.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.178 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:34.178 issued rwts: total=0,167238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.178 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:34.178 00:12:34.178 Run status group 0 (all jobs): 00:12:34.178 WRITE: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=653MiB (685MB), run=5001-5001msec 00:12:34.178 ----------------------------------------------------- 00:12:34.178 Suppressions used: 00:12:34.178 count bytes template 00:12:34.178 1 11 /usr/src/fio/parse.c 00:12:34.178 1 8 libtcmalloc_minimal.so 00:12:34.178 1 904 libcrypto.so 00:12:34.178 ----------------------------------------------------- 00:12:34.178 00:12:34.178 ************************************ 00:12:34.178 END TEST xnvme_fio_plugin 00:12:34.178 ************************************ 00:12:34.178 
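The two fio plugin passes above each boil down to a single invocation assembled by fio_bdev. A standalone equivalent of the randread pass, assuming the same tree layout the log records (fio built under /usr/src/fio, the SPDK fio plugin and ASan runtime at the logged paths, and the hypothetical /tmp/xnvme_libaio.json from the earlier sketch), would be:

    # LD_PRELOAD mirrors what the harness assembles after probing the
    # plugin's libasan dependency with ldd; flags are verbatim from the log.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_libaio.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

The randwrite pass differs only in --rw=randwrite.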
00:12:34.178 real 0m13.836s 00:12:34.178 user 0m6.627s 00:12:34.178 sys 0m5.853s 00:12:34.178 12:24:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.178 12:24:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:34.178 12:24:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:34.178 12:24:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:34.178 12:24:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:34.178 12:24:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:34.178 12:24:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:34.178 12:24:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.178 12:24:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:34.178 ************************************ 00:12:34.178 START TEST xnvme_rpc 00:12:34.178 ************************************ 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71203 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71203 00:12:34.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71203 ']' 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.178 12:24:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:34.439 [2024-12-16 12:24:41.323429] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
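The xnvme_rpc test starting here drives the target through rpc_cmd helpers. A minimal standalone sketch of the same flow, under the assumption that calling SPDK's scripts/rpc.py directly is equivalent to the harness's rpc_cmd wrapper (the sleep is a crude stand-in for waitforlisten):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" & TGT=$!
    sleep 2
    # Create the bdev with conserve_cpu enabled (-c), as in the logged call.
    "$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    # Verify the registered config; the jq filter is the one the test uses.
    "$SPDK/scripts/rpc.py" framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    "$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
    kill "$TGT"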
00:12:34.439 [2024-12-16 12:24:41.323586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71203 ] 00:12:34.439 [2024-12-16 12:24:41.485519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.700 [2024-12-16 12:24:41.605355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.273 xnvme_bdev 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.273 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71203 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71203 ']' 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71203 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71203 00:12:35.534 killing process with pid 71203 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71203' 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71203 00:12:35.534 12:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71203 00:12:37.449 ************************************ 00:12:37.449 END TEST xnvme_rpc 00:12:37.449 ************************************ 00:12:37.449 00:12:37.449 real 0m2.917s 00:12:37.449 user 0m2.896s 00:12:37.449 sys 0m0.486s 00:12:37.449 12:24:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.449 12:24:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.449 12:24:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:37.449 12:24:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:37.449 12:24:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.449 12:24:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.449 ************************************ 00:12:37.449 START TEST xnvme_bdevperf 00:12:37.449 ************************************ 00:12:37.449 12:24:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:37.449 12:24:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:37.449 12:24:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:37.449 12:24:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:37.449 12:24:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:37.449 12:24:44 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:37.449 12:24:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.449 12:24:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.449 { 00:12:37.449 "subsystems": [ 00:12:37.449 { 00:12:37.449 "subsystem": "bdev", 00:12:37.449 "config": [ 00:12:37.449 { 00:12:37.449 "params": { 00:12:37.449 "io_mechanism": "libaio", 00:12:37.449 "conserve_cpu": true, 00:12:37.449 "filename": "/dev/nvme0n1", 00:12:37.449 "name": "xnvme_bdev" 00:12:37.449 }, 00:12:37.449 "method": "bdev_xnvme_create" 00:12:37.449 }, 00:12:37.449 { 00:12:37.449 "method": "bdev_wait_for_examine" 00:12:37.449 } 00:12:37.449 ] 00:12:37.449 } 00:12:37.449 ] 00:12:37.449 } 00:12:37.449 [2024-12-16 12:24:44.293110] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:37.449 [2024-12-16 12:24:44.293290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71272 ] 00:12:37.449 [2024-12-16 12:24:44.459763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.710 [2024-12-16 12:24:44.578418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.971 Running I/O for 5 seconds... 00:12:39.854 31325.00 IOPS, 122.36 MiB/s [2024-12-16T12:24:47.903Z] 30185.50 IOPS, 117.91 MiB/s [2024-12-16T12:24:49.292Z] 30391.00 IOPS, 118.71 MiB/s [2024-12-16T12:24:50.237Z] 29972.75 IOPS, 117.08 MiB/s 00:12:43.131 Latency(us) 00:12:43.131 [2024-12-16T12:24:50.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.131 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:43.131 xnvme_bdev : 5.00 29666.37 115.88 0.00 0.00 2152.57 466.31 7864.32 00:12:43.131 [2024-12-16T12:24:50.237Z] =================================================================================================================== 00:12:43.131 [2024-12-16T12:24:50.237Z] Total : 29666.37 115.88 0.00 0.00 2152.57 466.31 7864.32 00:12:43.705 12:24:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:43.705 12:24:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:43.705 12:24:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:43.705 12:24:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:43.705 12:24:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:43.705 { 00:12:43.705 "subsystems": [ 00:12:43.705 { 00:12:43.705 "subsystem": "bdev", 00:12:43.705 "config": [ 00:12:43.705 { 00:12:43.705 "params": { 00:12:43.705 "io_mechanism": "libaio", 00:12:43.705 "conserve_cpu": true, 00:12:43.705 "filename": "/dev/nvme0n1", 00:12:43.705 "name": "xnvme_bdev" 00:12:43.705 }, 00:12:43.705 "method": "bdev_xnvme_create" 00:12:43.705 }, 00:12:43.705 { 00:12:43.705 "method": "bdev_wait_for_examine" 00:12:43.705 } 00:12:43.705 ] 00:12:43.705 } 00:12:43.705 ] 00:12:43.705 } 00:12:43.705 [2024-12-16 12:24:50.768957] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:43.705 [2024-12-16 12:24:50.769127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71347 ] 00:12:43.966 [2024-12-16 12:24:50.935507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.966 [2024-12-16 12:24:51.057996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.571 Running I/O for 5 seconds... 00:12:46.482 31539.00 IOPS, 123.20 MiB/s [2024-12-16T12:24:54.532Z] 30579.00 IOPS, 119.45 MiB/s [2024-12-16T12:24:55.472Z] 29882.00 IOPS, 116.73 MiB/s [2024-12-16T12:24:56.414Z] 30317.25 IOPS, 118.43 MiB/s 00:12:49.308 Latency(us) 00:12:49.308 [2024-12-16T12:24:56.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.308 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:49.308 xnvme_bdev : 5.00 31220.31 121.95 0.00 0.00 2045.28 222.13 9074.22 00:12:49.308 [2024-12-16T12:24:56.414Z] =================================================================================================================== 00:12:49.308 [2024-12-16T12:24:56.414Z] Total : 31220.31 121.95 0.00 0.00 2045.28 222.13 9074.22 00:12:50.251 00:12:50.251 real 0m12.964s 00:12:50.251 user 0m4.889s 00:12:50.251 sys 0m6.499s 00:12:50.251 12:24:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.251 12:24:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:50.251 ************************************ 00:12:50.251 END TEST xnvme_bdevperf 00:12:50.251 ************************************ 00:12:50.251 12:24:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:50.251 12:24:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:50.251 12:24:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.251 12:24:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.251 ************************************ 00:12:50.251 START TEST xnvme_fio_plugin 00:12:50.251 ************************************ 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:50.251 12:24:57 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:50.251 12:24:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:50.251 { 00:12:50.251 "subsystems": [ 00:12:50.251 { 00:12:50.251 "subsystem": "bdev", 00:12:50.251 "config": [ 00:12:50.251 { 00:12:50.251 "params": { 00:12:50.251 "io_mechanism": "libaio", 00:12:50.251 "conserve_cpu": true, 00:12:50.251 "filename": "/dev/nvme0n1", 00:12:50.251 "name": "xnvme_bdev" 00:12:50.251 }, 00:12:50.251 "method": "bdev_xnvme_create" 00:12:50.251 }, 00:12:50.251 { 00:12:50.251 "method": "bdev_wait_for_examine" 00:12:50.251 } 00:12:50.251 ] 00:12:50.251 } 00:12:50.251 ] 00:12:50.251 } 00:12:50.512 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:50.512 fio-3.35 00:12:50.512 Starting 1 thread 00:12:57.103 00:12:57.103 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71467: Mon Dec 16 12:25:03 2024 00:12:57.103 read: IOPS=30.9k, BW=121MiB/s (127MB/s)(604MiB/5001msec) 00:12:57.103 slat (usec): min=4, max=2026, avg=22.44, stdev=101.55 00:12:57.103 clat (usec): min=120, max=4896, avg=1456.63, stdev=534.77 00:12:57.103 lat (usec): min=205, max=4944, avg=1479.07, stdev=524.08 00:12:57.103 clat percentiles (usec): 00:12:57.103 | 1.00th=[ 306], 5.00th=[ 619], 10.00th=[ 791], 20.00th=[ 1020], 00:12:57.103 | 30.00th=[ 1172], 40.00th=[ 1319], 50.00th=[ 1450], 60.00th=[ 1582], 00:12:57.103 | 70.00th=[ 1696], 80.00th=[ 1844], 90.00th=[ 2073], 95.00th=[ 2343], 00:12:57.103 | 99.00th=[ 3032], 99.50th=[ 3359], 99.90th=[ 3982], 99.95th=[ 4228], 00:12:57.103 | 99.99th=[ 4686] 00:12:57.103 bw ( KiB/s): min=105069, max=132568, per=99.04%, avg=122394.33, stdev=10141.40, 
samples=9 00:12:57.103 iops : min=26267, max=33142, avg=30598.56, stdev=2535.40, samples=9 00:12:57.103 lat (usec) : 250=0.52%, 500=2.33%, 750=5.73%, 1000=10.36% 00:12:57.103 lat (msec) : 2=68.43%, 4=12.55%, 10=0.09% 00:12:57.103 cpu : usr=42.89%, sys=49.19%, ctx=19, majf=0, minf=764 00:12:57.103 IO depths : 1=0.6%, 2=1.3%, 4=3.2%, 8=8.4%, 16=23.1%, 32=61.4%, >=64=2.1% 00:12:57.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.103 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:57.103 issued rwts: total=154509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.103 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.103 00:12:57.103 Run status group 0 (all jobs): 00:12:57.103 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=604MiB (633MB), run=5001-5001msec 00:12:57.103 ----------------------------------------------------- 00:12:57.103 Suppressions used: 00:12:57.103 count bytes template 00:12:57.103 1 11 /usr/src/fio/parse.c 00:12:57.103 1 8 libtcmalloc_minimal.so 00:12:57.103 1 904 libcrypto.so 00:12:57.103 ----------------------------------------------------- 00:12:57.103 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:57.364 12:25:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.364 { 00:12:57.364 "subsystems": [ 00:12:57.364 { 00:12:57.364 "subsystem": "bdev", 00:12:57.364 "config": [ 00:12:57.364 { 00:12:57.364 "params": { 00:12:57.364 "io_mechanism": "libaio", 00:12:57.364 "conserve_cpu": true, 00:12:57.364 "filename": "/dev/nvme0n1", 00:12:57.364 "name": "xnvme_bdev" 00:12:57.364 }, 00:12:57.364 "method": "bdev_xnvme_create" 00:12:57.364 }, 00:12:57.364 { 00:12:57.364 "method": "bdev_wait_for_examine" 00:12:57.364 } 00:12:57.364 ] 00:12:57.364 } 00:12:57.364 ] 00:12:57.364 } 00:12:57.364 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:57.364 fio-3.35 00:12:57.364 Starting 1 thread 00:13:03.952 00:13:03.952 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71560: Mon Dec 16 12:25:10 2024 00:13:03.952 write: IOPS=31.3k, BW=122MiB/s (128MB/s)(611MiB/5001msec); 0 zone resets 00:13:03.952 slat (usec): min=4, max=1896, avg=22.55, stdev=97.41 00:13:03.952 clat (usec): min=108, max=7182, avg=1426.47, stdev=557.62 00:13:03.952 lat (usec): min=193, max=7187, avg=1449.03, stdev=548.35 00:13:03.952 clat percentiles (usec): 00:13:03.952 | 1.00th=[ 293], 5.00th=[ 570], 10.00th=[ 742], 20.00th=[ 963], 00:13:03.952 | 30.00th=[ 1123], 40.00th=[ 1270], 50.00th=[ 1401], 60.00th=[ 1532], 00:13:03.952 | 70.00th=[ 1680], 80.00th=[ 1844], 90.00th=[ 2114], 95.00th=[ 2343], 00:13:03.952 | 99.00th=[ 2999], 99.50th=[ 3326], 99.90th=[ 4015], 99.95th=[ 4293], 00:13:03.952 | 99.99th=[ 5145] 00:13:03.952 bw ( KiB/s): min=115432, max=137616, per=99.60%, avg=124573.33, stdev=6948.04, samples=9 00:13:03.952 iops : min=28858, max=34404, avg=31143.33, stdev=1737.01, samples=9 00:13:03.952 lat (usec) : 250=0.57%, 500=3.14%, 750=6.61%, 1000=11.81% 00:13:03.952 lat (msec) : 2=64.40%, 4=13.38%, 10=0.10% 00:13:03.952 cpu : usr=42.52%, sys=49.02%, ctx=13, majf=0, minf=765 00:13:03.952 IO depths : 1=0.6%, 2=1.3%, 4=3.2%, 8=8.5%, 16=22.9%, 32=61.5%, >=64=2.1% 00:13:03.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.952 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:03.952 issued rwts: total=0,156368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:03.952 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:03.952 00:13:03.952 Run status group 0 (all jobs): 00:13:03.952 WRITE: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=611MiB (640MB), run=5001-5001msec 00:13:04.214 ----------------------------------------------------- 00:13:04.214 Suppressions used: 00:13:04.214 count bytes template 00:13:04.214 1 11 /usr/src/fio/parse.c 00:13:04.214 1 8 libtcmalloc_minimal.so 00:13:04.214 1 904 libcrypto.so 00:13:04.214 ----------------------------------------------------- 00:13:04.214 00:13:04.214 00:13:04.214 real 0m13.945s 00:13:04.214 user 0m7.162s 00:13:04.214 sys 0m5.570s 00:13:04.214 12:25:11 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.214 ************************************ 00:13:04.214 END TEST xnvme_fio_plugin 00:13:04.214 ************************************ 00:13:04.214 12:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:04.214 12:25:11 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:04.214 12:25:11 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:04.214 12:25:11 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:04.214 12:25:11 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:04.214 12:25:11 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:04.214 12:25:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:04.214 12:25:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:04.214 12:25:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:04.214 12:25:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:04.214 12:25:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:04.214 12:25:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.214 12:25:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.214 ************************************ 00:13:04.214 START TEST xnvme_rpc 00:13:04.214 ************************************ 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71646 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71646 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71646 ']' 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:04.214 12:25:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.475 [2024-12-16 12:25:11.376384] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
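For the io_uring pass that begins here, only the create call changes relative to the libaio sketch earlier: io_mechanism becomes io_uring and the conserve_cpu flag is left off (the log passes an empty flag string). Against a running spdk_tgt as in that sketch, with the same path assumptions:

    "$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    "$SPDK/scripts/rpc.py" framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring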
00:13:04.475 [2024-12-16 12:25:11.376525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71646 ] 00:13:04.475 [2024-12-16 12:25:11.541318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.736 [2024-12-16 12:25:11.660278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.309 xnvme_bdev 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.309 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71646 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71646 ']' 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71646 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71646 00:13:05.571 killing process with pid 71646 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71646' 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71646 00:13:05.571 12:25:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71646 00:13:07.488 00:13:07.488 real 0m2.897s 00:13:07.488 user 0m2.852s 00:13:07.488 sys 0m0.521s 00:13:07.488 12:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.488 ************************************ 00:13:07.488 END TEST xnvme_rpc 00:13:07.488 ************************************ 00:13:07.488 12:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.488 12:25:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:07.488 12:25:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:07.488 12:25:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.488 12:25:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.488 ************************************ 00:13:07.488 START TEST xnvme_bdevperf 00:13:07.488 ************************************ 00:13:07.488 12:25:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:07.488 12:25:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:07.488 12:25:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:07.488 12:25:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:07.488 12:25:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:07.488 12:25:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:07.488 12:25:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:07.488 12:25:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.488 { 00:13:07.488 "subsystems": [ 00:13:07.488 { 00:13:07.488 "subsystem": "bdev", 00:13:07.488 "config": [ 00:13:07.488 { 00:13:07.488 "params": { 00:13:07.488 "io_mechanism": "io_uring", 00:13:07.488 "conserve_cpu": false, 00:13:07.488 "filename": "/dev/nvme0n1", 00:13:07.488 "name": "xnvme_bdev" 00:13:07.488 }, 00:13:07.488 "method": "bdev_xnvme_create" 00:13:07.488 }, 00:13:07.488 { 00:13:07.488 "method": "bdev_wait_for_examine" 00:13:07.488 } 00:13:07.488 ] 00:13:07.488 } 00:13:07.488 ] 00:13:07.488 } 00:13:07.488 [2024-12-16 12:25:14.337680] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:07.488 [2024-12-16 12:25:14.337820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71720 ] 00:13:07.488 [2024-12-16 12:25:14.504011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.749 [2024-12-16 12:25:14.622783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.010 Running I/O for 5 seconds... 00:13:09.896 34020.00 IOPS, 132.89 MiB/s [2024-12-16T12:25:17.945Z] 33543.00 IOPS, 131.03 MiB/s [2024-12-16T12:25:19.332Z] 33067.00 IOPS, 129.17 MiB/s [2024-12-16T12:25:20.275Z] 32954.75 IOPS, 128.73 MiB/s [2024-12-16T12:25:20.275Z] 32926.00 IOPS, 128.62 MiB/s 00:13:13.169 Latency(us) 00:13:13.169 [2024-12-16T12:25:20.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.169 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:13.169 xnvme_bdev : 5.00 32910.50 128.56 0.00 0.00 1940.93 655.36 4360.66 00:13:13.169 [2024-12-16T12:25:20.275Z] =================================================================================================================== 00:13:13.169 [2024-12-16T12:25:20.275Z] Total : 32910.50 128.56 0.00 0.00 1940.93 655.36 4360.66 00:13:13.753 12:25:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:13.753 12:25:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:13.753 12:25:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:13.753 12:25:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:13.753 12:25:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:13.753 { 00:13:13.753 "subsystems": [ 00:13:13.753 { 00:13:13.753 "subsystem": "bdev", 00:13:13.753 "config": [ 00:13:13.753 { 00:13:13.753 "params": { 00:13:13.753 "io_mechanism": "io_uring", 00:13:13.753 "conserve_cpu": false, 00:13:13.753 "filename": "/dev/nvme0n1", 00:13:13.753 "name": "xnvme_bdev" 00:13:13.753 }, 00:13:13.753 "method": "bdev_xnvme_create" 00:13:13.753 }, 00:13:13.753 { 00:13:13.753 "method": "bdev_wait_for_examine" 00:13:13.753 } 00:13:13.753 ] 00:13:13.753 } 00:13:13.753 ] 00:13:13.753 } 00:13:13.753 [2024-12-16 12:25:20.803077] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:13:13.753 [2024-12-16 12:25:20.803462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71799 ] 00:13:14.074 [2024-12-16 12:25:20.972267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.074 [2024-12-16 12:25:21.090598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.335 Running I/O for 5 seconds... 00:13:16.664 33632.00 IOPS, 131.38 MiB/s [2024-12-16T12:25:24.712Z] 33839.00 IOPS, 132.18 MiB/s [2024-12-16T12:25:25.653Z] 33972.33 IOPS, 132.70 MiB/s [2024-12-16T12:25:26.594Z] 34035.25 IOPS, 132.95 MiB/s 00:13:19.488 Latency(us) 00:13:19.488 [2024-12-16T12:25:26.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.488 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:19.488 xnvme_bdev : 5.00 34125.28 133.30 0.00 0.00 1871.44 373.37 7864.32 00:13:19.488 [2024-12-16T12:25:26.594Z] =================================================================================================================== 00:13:19.488 [2024-12-16T12:25:26.594Z] Total : 34125.28 133.30 0.00 0.00 1871.44 373.37 7864.32 00:13:20.060 00:13:20.060 real 0m12.901s 00:13:20.060 user 0m6.271s 00:13:20.060 sys 0m6.355s 00:13:20.060 12:25:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.060 ************************************ 00:13:20.060 END TEST xnvme_bdevperf 00:13:20.060 ************************************ 00:13:20.060 12:25:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:20.322 12:25:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:20.322 12:25:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:20.322 12:25:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.322 12:25:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.322 ************************************ 00:13:20.322 START TEST xnvme_fio_plugin 00:13:20.322 ************************************ 00:13:20.322 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:20.322 12:25:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:20.322 12:25:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:20.322 12:25:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:20.322 12:25:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.322 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:20.323 12:25:27 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:20.323 12:25:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.323 { 00:13:20.323 "subsystems": [ 00:13:20.323 { 00:13:20.323 "subsystem": "bdev", 00:13:20.323 "config": [ 00:13:20.323 { 00:13:20.323 "params": { 00:13:20.323 "io_mechanism": "io_uring", 00:13:20.323 "conserve_cpu": false, 00:13:20.323 "filename": "/dev/nvme0n1", 00:13:20.323 "name": "xnvme_bdev" 00:13:20.323 }, 00:13:20.323 "method": "bdev_xnvme_create" 00:13:20.323 }, 00:13:20.323 { 00:13:20.323 "method": "bdev_wait_for_examine" 00:13:20.323 } 00:13:20.323 ] 00:13:20.323 } 00:13:20.323 ] 00:13:20.323 } 00:13:20.584 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:20.584 fio-3.35 00:13:20.584 Starting 1 thread 00:13:27.175 00:13:27.175 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71913: Mon Dec 16 12:25:33 2024 00:13:27.175 read: IOPS=32.5k, BW=127MiB/s (133MB/s)(635MiB/5001msec) 00:13:27.175 slat (nsec): min=2874, max=56636, avg=3402.97, stdev=1701.41 00:13:27.175 clat (usec): min=1090, max=3529, avg=1831.90, stdev=285.91 00:13:27.175 lat (usec): min=1093, max=3565, avg=1835.30, stdev=286.15 00:13:27.175 clat percentiles (usec): 00:13:27.175 | 1.00th=[ 1303], 5.00th=[ 1418], 10.00th=[ 1483], 20.00th=[ 1582], 00:13:27.175 | 30.00th=[ 1663], 40.00th=[ 1729], 50.00th=[ 1811], 60.00th=[ 1876], 00:13:27.175 | 70.00th=[ 1958], 80.00th=[ 2057], 90.00th=[ 2212], 95.00th=[ 2343], 00:13:27.175 | 99.00th=[ 2638], 99.50th=[ 2769], 99.90th=[ 2999], 99.95th=[ 3130], 00:13:27.175 | 99.99th=[ 3359] 00:13:27.175 bw ( KiB/s): min=126464, max=133632, per=100.00%, avg=130161.78, 
stdev=2569.94, samples=9 00:13:27.175 iops : min=31616, max=33408, avg=32540.44, stdev=642.48, samples=9 00:13:27.175 lat (msec) : 2=75.04%, 4=24.96% 00:13:27.175 cpu : usr=32.22%, sys=66.70%, ctx=12, majf=0, minf=762 00:13:27.175 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:27.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.175 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:27.175 issued rwts: total=162432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.175 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.175 00:13:27.175 Run status group 0 (all jobs): 00:13:27.175 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=635MiB (665MB), run=5001-5001msec 00:13:27.175 ----------------------------------------------------- 00:13:27.175 Suppressions used: 00:13:27.175 count bytes template 00:13:27.175 1 11 /usr/src/fio/parse.c 00:13:27.175 1 8 libtcmalloc_minimal.so 00:13:27.175 1 904 libcrypto.so 00:13:27.175 ----------------------------------------------------- 00:13:27.175 00:13:27.175 12:25:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:27.175 12:25:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:27.176 12:25:34 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:27.176 12:25:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:27.176 { 00:13:27.176 "subsystems": [ 00:13:27.176 { 00:13:27.176 "subsystem": "bdev", 00:13:27.176 "config": [ 00:13:27.176 { 00:13:27.176 "params": { 00:13:27.176 "io_mechanism": "io_uring", 00:13:27.176 "conserve_cpu": false, 00:13:27.176 "filename": "/dev/nvme0n1", 00:13:27.176 "name": "xnvme_bdev" 00:13:27.176 }, 00:13:27.176 "method": "bdev_xnvme_create" 00:13:27.176 }, 00:13:27.176 { 00:13:27.176 "method": "bdev_wait_for_examine" 00:13:27.176 } 00:13:27.176 ] 00:13:27.176 } 00:13:27.176 ] 00:13:27.176 } 00:13:27.437 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:27.437 fio-3.35 00:13:27.437 Starting 1 thread 00:13:34.024 00:13:34.024 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72010: Mon Dec 16 12:25:40 2024 00:13:34.024 write: IOPS=33.5k, BW=131MiB/s (137MB/s)(655MiB/5002msec); 0 zone resets 00:13:34.024 slat (usec): min=2, max=148, avg= 3.66, stdev= 1.79 00:13:34.024 clat (usec): min=457, max=8142, avg=1762.76, stdev=288.02 00:13:34.024 lat (usec): min=460, max=8147, avg=1766.42, stdev=288.20 00:13:34.024 clat percentiles (usec): 00:13:34.024 | 1.00th=[ 1254], 5.00th=[ 1385], 10.00th=[ 1450], 20.00th=[ 1532], 00:13:34.024 | 30.00th=[ 1598], 40.00th=[ 1663], 50.00th=[ 1729], 60.00th=[ 1795], 00:13:34.024 | 70.00th=[ 1876], 80.00th=[ 1975], 90.00th=[ 2114], 95.00th=[ 2245], 00:13:34.024 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 2966], 99.95th=[ 3261], 00:13:34.024 | 99.99th=[ 7963] 00:13:34.024 bw ( KiB/s): min=129408, max=140288, per=99.51%, avg=133457.56, stdev=3420.13, samples=9 00:13:34.024 iops : min=32352, max=35072, avg=33364.33, stdev=855.04, samples=9 00:13:34.024 lat (usec) : 500=0.01%, 750=0.01% 00:13:34.024 lat (msec) : 2=82.86%, 4=17.09%, 10=0.04% 00:13:34.024 cpu : usr=33.37%, sys=65.55%, ctx=10, majf=0, minf=763 00:13:34.024 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:34.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.024 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:34.024 issued rwts: total=0,167709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.024 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:34.024 00:13:34.024 Run status group 0 (all jobs): 00:13:34.024 WRITE: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=655MiB (687MB), run=5002-5002msec 00:13:34.024 ----------------------------------------------------- 00:13:34.024 Suppressions used: 00:13:34.024 count bytes template 00:13:34.024 1 11 /usr/src/fio/parse.c 00:13:34.024 1 8 libtcmalloc_minimal.so 00:13:34.024 1 904 libcrypto.so 00:13:34.024 ----------------------------------------------------- 00:13:34.024 00:13:34.285 ************************************ 00:13:34.285 END TEST xnvme_fio_plugin 00:13:34.285 ************************************ 00:13:34.285 00:13:34.285 real 0m13.901s 00:13:34.285 user 0m6.253s 00:13:34.285 sys 0m7.219s 00:13:34.285 12:25:41 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.285 12:25:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:34.285 12:25:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:34.285 12:25:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:34.285 12:25:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:34.285 12:25:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:34.285 12:25:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:34.285 12:25:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.285 12:25:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:34.285 ************************************ 00:13:34.285 START TEST xnvme_rpc 00:13:34.285 ************************************ 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:34.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72091 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72091 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72091 ']' 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.285 12:25:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:34.285 [2024-12-16 12:25:41.288249] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
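xnvme_rpc exercises the same bdev through JSON-RPC instead of a static config: start spdk_tgt, create the bdev, read its parameters back via framework_get_config, then delete it. The harness's rpc_cmd wraps SPDK's scripts/rpc.py, so a hand-driven sketch of the flow below looks like this (paths from this log; the -c flag maps to conserve_cpu=true per the cc["true"]=-c table above — an assumption about rpc.py's CLI inferred from how the test forwards it):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &     # target must be up first
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create \
    /dev/nvme0n1 xnvme_bdev io_uring -c               # -c => conserve_cpu=true
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_delete xnvme_bdev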
00:13:34.285 [2024-12-16 12:25:41.288397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72091 ] 00:13:34.545 [2024-12-16 12:25:41.455305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.545 [2024-12-16 12:25:41.583330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.487 xnvme_bdev 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72091 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72091 ']' 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72091 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72091 00:13:35.487 killing process with pid 72091 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72091' 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72091 00:13:35.487 12:25:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72091 00:13:37.400 ************************************ 00:13:37.400 END TEST xnvme_rpc 00:13:37.400 ************************************ 00:13:37.400 00:13:37.400 real 0m2.939s 00:13:37.400 user 0m2.949s 00:13:37.400 sys 0m0.476s 00:13:37.400 12:25:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.400 12:25:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.400 12:25:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:37.400 12:25:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:37.400 12:25:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.400 12:25:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.400 ************************************ 00:13:37.400 START TEST xnvme_bdevperf 00:13:37.400 ************************************ 00:13:37.400 12:25:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:37.400 12:25:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:37.400 12:25:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:37.400 12:25:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:37.401 12:25:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:37.401 12:25:44 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:37.401 12:25:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:37.401 12:25:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:37.401 { 00:13:37.401 "subsystems": [ 00:13:37.401 { 00:13:37.401 "subsystem": "bdev", 00:13:37.401 "config": [ 00:13:37.401 { 00:13:37.401 "params": { 00:13:37.401 "io_mechanism": "io_uring", 00:13:37.401 "conserve_cpu": true, 00:13:37.401 "filename": "/dev/nvme0n1", 00:13:37.401 "name": "xnvme_bdev" 00:13:37.401 }, 00:13:37.401 "method": "bdev_xnvme_create" 00:13:37.401 }, 00:13:37.401 { 00:13:37.401 "method": "bdev_wait_for_examine" 00:13:37.401 } 00:13:37.401 ] 00:13:37.401 } 00:13:37.401 ] 00:13:37.401 } 00:13:37.401 [2024-12-16 12:25:44.299254] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:37.401 [2024-12-16 12:25:44.299577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72165 ] 00:13:37.401 [2024-12-16 12:25:44.464604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.661 [2024-12-16 12:25:44.583839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.922 Running I/O for 5 seconds... 00:13:39.805 33233.00 IOPS, 129.82 MiB/s [2024-12-16T12:25:48.296Z] 33250.00 IOPS, 129.88 MiB/s [2024-12-16T12:25:49.241Z] 33233.00 IOPS, 129.82 MiB/s [2024-12-16T12:25:50.185Z] 33113.25 IOPS, 129.35 MiB/s [2024-12-16T12:25:50.185Z] 33067.60 IOPS, 129.17 MiB/s 00:13:43.079 Latency(us) 00:13:43.079 [2024-12-16T12:25:50.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.079 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:43.079 xnvme_bdev : 5.01 33043.05 129.07 0.00 0.00 1932.81 1071.26 6856.07 00:13:43.079 [2024-12-16T12:25:50.185Z] =================================================================================================================== 00:13:43.079 [2024-12-16T12:25:50.185Z] Total : 33043.05 129.07 0.00 0.00 1932.81 1071.26 6856.07 00:13:43.716 12:25:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:43.716 12:25:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:43.716 12:25:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:43.716 12:25:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:43.716 12:25:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:43.716 { 00:13:43.716 "subsystems": [ 00:13:43.716 { 00:13:43.716 "subsystem": "bdev", 00:13:43.716 "config": [ 00:13:43.716 { 00:13:43.716 "params": { 00:13:43.716 "io_mechanism": "io_uring", 00:13:43.716 "conserve_cpu": true, 00:13:43.716 "filename": "/dev/nvme0n1", 00:13:43.716 "name": "xnvme_bdev" 00:13:43.716 }, 00:13:43.716 "method": "bdev_xnvme_create" 00:13:43.716 }, 00:13:43.716 { 00:13:43.716 "method": "bdev_wait_for_examine" 00:13:43.716 } 00:13:43.716 ] 00:13:43.716 } 00:13:43.716 ] 00:13:43.716 } 00:13:43.716 [2024-12-16 12:25:50.771081] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
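This pass repeats the io_uring bdevperf runs with a single delta, conserve_cpu=true, visible in the JSON above. Throughput stays in the same ~33k IOPS band, but the per-pass real/user/sys summaries show the CPU split shifting from sys toward user time. A sketch of deriving the variant config from the file in the earlier sketch (file names illustrative):

# Flip the one parameter this pass changes.
jq '.subsystems[0].config[0].params.conserve_cpu = true' \
    /tmp/xnvme_bdev.json > /tmp/xnvme_bdev_cc.json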
00:13:43.716 [2024-12-16 12:25:50.771468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72240 ] 00:13:43.978 [2024-12-16 12:25:50.936293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.978 [2024-12-16 12:25:51.057413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.240 Running I/O for 5 seconds... 00:13:46.572 33635.00 IOPS, 131.39 MiB/s [2024-12-16T12:25:54.621Z] 33593.00 IOPS, 131.22 MiB/s [2024-12-16T12:25:55.562Z] 33481.00 IOPS, 130.79 MiB/s [2024-12-16T12:25:56.504Z] 33690.50 IOPS, 131.60 MiB/s [2024-12-16T12:25:56.504Z] 33601.60 IOPS, 131.26 MiB/s 00:13:49.398 Latency(us) 00:13:49.398 [2024-12-16T12:25:56.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.398 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:49.398 xnvme_bdev : 5.01 33586.24 131.20 0.00 0.00 1901.14 406.45 9225.45 00:13:49.398 [2024-12-16T12:25:56.504Z] =================================================================================================================== 00:13:49.398 [2024-12-16T12:25:56.504Z] Total : 33586.24 131.20 0.00 0.00 1901.14 406.45 9225.45 00:13:50.342 00:13:50.342 real 0m12.921s 00:13:50.342 user 0m8.324s 00:13:50.342 sys 0m4.030s 00:13:50.342 12:25:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.342 12:25:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:50.342 ************************************ 00:13:50.342 END TEST xnvme_bdevperf 00:13:50.342 ************************************ 00:13:50.342 12:25:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:50.342 12:25:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:50.342 12:25:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.342 12:25:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.342 ************************************ 00:13:50.342 START TEST xnvme_fio_plugin 00:13:50.342 ************************************ 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:50.342 12:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.342 { 00:13:50.342 "subsystems": [ 00:13:50.342 { 00:13:50.342 "subsystem": "bdev", 00:13:50.342 "config": [ 00:13:50.342 { 00:13:50.342 "params": { 00:13:50.342 "io_mechanism": "io_uring", 00:13:50.342 "conserve_cpu": true, 00:13:50.342 "filename": "/dev/nvme0n1", 00:13:50.342 "name": "xnvme_bdev" 00:13:50.342 }, 00:13:50.342 "method": "bdev_xnvme_create" 00:13:50.342 }, 00:13:50.342 { 00:13:50.342 "method": "bdev_wait_for_examine" 00:13:50.342 } 00:13:50.342 ] 00:13:50.342 } 00:13:50.342 ] 00:13:50.342 } 00:13:50.342 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:50.342 fio-3.35 00:13:50.342 Starting 1 thread 00:13:56.933 00:13:56.933 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72354: Mon Dec 16 12:26:03 2024 00:13:56.933 read: IOPS=31.8k, BW=124MiB/s (130MB/s)(621MiB/5001msec) 00:13:56.933 slat (usec): min=2, max=387, avg= 3.62, stdev= 2.48 00:13:56.933 clat (usec): min=1165, max=6392, avg=1866.43, stdev=298.08 00:13:56.933 lat (usec): min=1168, max=6396, avg=1870.05, stdev=298.40 00:13:56.933 clat percentiles (usec): 00:13:56.933 | 1.00th=[ 1352], 5.00th=[ 1450], 10.00th=[ 1516], 20.00th=[ 1614], 00:13:56.933 | 30.00th=[ 1696], 40.00th=[ 1762], 50.00th=[ 1827], 60.00th=[ 1909], 00:13:56.933 | 70.00th=[ 1991], 80.00th=[ 2089], 90.00th=[ 2245], 95.00th=[ 2409], 00:13:56.933 | 99.00th=[ 2704], 99.50th=[ 2900], 99.90th=[ 3326], 99.95th=[ 3490], 00:13:56.933 | 99.99th=[ 4686] 00:13:56.933 bw ( KiB/s): 
min=122864, max=130560, per=100.00%, avg=127173.78, stdev=2285.27, samples=9 00:13:56.933 iops : min=30716, max=32640, avg=31793.44, stdev=571.32, samples=9 00:13:56.933 lat (msec) : 2=71.40%, 4=28.55%, 10=0.04% 00:13:56.933 cpu : usr=58.93%, sys=36.87%, ctx=26, majf=0, minf=762 00:13:56.934 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:56.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.934 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:56.934 issued rwts: total=158910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:56.934 00:13:56.934 Run status group 0 (all jobs): 00:13:56.934 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=621MiB (651MB), run=5001-5001msec 00:13:57.195 ----------------------------------------------------- 00:13:57.195 Suppressions used: 00:13:57.195 count bytes template 00:13:57.195 1 11 /usr/src/fio/parse.c 00:13:57.195 1 8 libtcmalloc_minimal.so 00:13:57.195 1 904 libcrypto.so 00:13:57.195 ----------------------------------------------------- 00:13:57.195 00:13:57.195 12:26:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:57.195 12:26:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.195 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.195 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:57.196 { 00:13:57.196 "subsystems": [ 00:13:57.196 { 00:13:57.196 "subsystem": "bdev", 00:13:57.196 "config": [ 00:13:57.196 { 00:13:57.196 "params": { 00:13:57.196 "io_mechanism": "io_uring", 
00:13:57.196 "conserve_cpu": true, 00:13:57.196 "filename": "/dev/nvme0n1", 00:13:57.196 "name": "xnvme_bdev" 00:13:57.196 }, 00:13:57.196 "method": "bdev_xnvme_create" 00:13:57.196 }, 00:13:57.196 { 00:13:57.196 "method": "bdev_wait_for_examine" 00:13:57.196 } 00:13:57.196 ] 00:13:57.196 } 00:13:57.196 ] 00:13:57.196 } 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:57.196 12:26:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.458 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:57.458 fio-3.35 00:13:57.458 Starting 1 thread 00:14:04.048 00:14:04.048 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72451: Mon Dec 16 12:26:10 2024 00:14:04.048 write: IOPS=37.4k, BW=146MiB/s (153MB/s)(730MiB/5001msec); 0 zone resets 00:14:04.048 slat (usec): min=2, max=398, avg= 3.58, stdev= 2.49 00:14:04.048 clat (usec): min=733, max=4430, avg=1569.19, stdev=298.43 00:14:04.048 lat (usec): min=736, max=4433, avg=1572.77, stdev=298.88 00:14:04.048 clat percentiles (usec): 00:14:04.048 | 1.00th=[ 1074], 5.00th=[ 1172], 10.00th=[ 1237], 20.00th=[ 1319], 00:14:04.048 | 30.00th=[ 1385], 40.00th=[ 1467], 50.00th=[ 1532], 60.00th=[ 1598], 00:14:04.048 | 70.00th=[ 1680], 80.00th=[ 1795], 90.00th=[ 1958], 95.00th=[ 2114], 00:14:04.048 | 99.00th=[ 2474], 99.50th=[ 2606], 99.90th=[ 3032], 99.95th=[ 3294], 00:14:04.048 | 99.99th=[ 3654] 00:14:04.048 bw ( KiB/s): min=130794, max=159632, per=98.70%, avg=147626.00, stdev=9656.17, samples=9 00:14:04.048 iops : min=32698, max=39908, avg=36906.44, stdev=2414.15, samples=9 00:14:04.048 lat (usec) : 750=0.01%, 1000=0.19% 00:14:04.048 lat (msec) : 2=91.53%, 4=8.27%, 10=0.01% 00:14:04.048 cpu : usr=63.36%, sys=32.86%, ctx=44, majf=0, minf=763 00:14:04.048 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:04.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.048 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:04.048 issued rwts: total=0,186998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.048 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:04.048 00:14:04.048 Run status group 0 (all jobs): 00:14:04.048 WRITE: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=730MiB (766MB), run=5001-5001msec 00:14:04.048 ----------------------------------------------------- 00:14:04.048 Suppressions used: 00:14:04.048 count bytes template 00:14:04.048 1 11 /usr/src/fio/parse.c 00:14:04.048 1 8 libtcmalloc_minimal.so 00:14:04.048 1 904 libcrypto.so 00:14:04.048 ----------------------------------------------------- 00:14:04.048 00:14:04.048 ************************************ 00:14:04.048 END TEST xnvme_fio_plugin 00:14:04.048 ************************************ 00:14:04.048 00:14:04.048 real 0m13.732s 00:14:04.048 
user 0m8.918s 00:14:04.048 sys 0m4.098s 00:14:04.048 12:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.048 12:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:04.048 12:26:10 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:04.048 12:26:10 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:04.049 12:26:10 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:04.049 12:26:10 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:04.049 12:26:10 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:04.049 12:26:10 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:04.049 12:26:10 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:04.049 12:26:10 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:04.049 12:26:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:04.049 12:26:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:04.049 12:26:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.049 12:26:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.049 ************************************ 00:14:04.049 START TEST xnvme_rpc 00:14:04.049 ************************************ 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:04.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72532 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72532 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72532 ']' 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.049 12:26:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.049 [2024-12-16 12:26:11.067910] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
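From here the suite switches io_mechanism to io_uring_cmd, which drives the NVMe generic character device (/dev/ng0n1) with passthrough commands instead of the /dev/nvme0n1 block device. The create call below differs only in those two parameters, with conserve_cpu left false via the empty '' argument. A hand-driven sketch (same rpc.py assumption as the earlier xnvme_rpc sketch):

ls -l /dev/ng0n1    # the ng char node must exist alongside the block device
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create \
    /dev/ng0n1 xnvme_bdev io_uring_cmd                # no -c: conserve_cpu=false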
00:14:04.049 [2024-12-16 12:26:11.068028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72532 ] 00:14:04.310 [2024-12-16 12:26:11.225675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.310 [2024-12-16 12:26:11.323830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.880 xnvme_bdev 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:04.880 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.140 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:05.140 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:05.140 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:05.140 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.140 12:26:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.140 12:26:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:05.140 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.140 12:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72532 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72532 ']' 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72532 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72532 00:14:05.141 killing process with pid 72532 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72532' 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72532 00:14:05.141 12:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72532 00:14:07.053 ************************************ 00:14:07.053 END TEST xnvme_rpc 00:14:07.053 ************************************ 00:14:07.053 00:14:07.053 real 0m2.744s 00:14:07.053 user 0m2.844s 00:14:07.053 sys 0m0.357s 00:14:07.053 12:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.053 12:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.053 12:26:13 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:07.053 12:26:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:07.053 12:26:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.053 12:26:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.053 ************************************ 00:14:07.053 START TEST xnvme_bdevperf 00:14:07.053 ************************************ 00:14:07.053 12:26:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:07.053 12:26:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:07.053 12:26:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:07.053 12:26:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.053 12:26:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:07.053 12:26:13 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.053 12:26:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.053 12:26:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.053 { 00:14:07.053 "subsystems": [ 00:14:07.053 { 00:14:07.053 "subsystem": "bdev", 00:14:07.053 "config": [ 00:14:07.053 { 00:14:07.053 "params": { 00:14:07.053 "io_mechanism": "io_uring_cmd", 00:14:07.053 "conserve_cpu": false, 00:14:07.053 "filename": "/dev/ng0n1", 00:14:07.053 "name": "xnvme_bdev" 00:14:07.053 }, 00:14:07.053 "method": "bdev_xnvme_create" 00:14:07.053 }, 00:14:07.053 { 00:14:07.053 "method": "bdev_wait_for_examine" 00:14:07.053 } 00:14:07.053 ] 00:14:07.053 } 00:14:07.053 ] 00:14:07.053 } 00:14:07.053 [2024-12-16 12:26:13.883364] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:07.053 [2024-12-16 12:26:13.883712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72606 ] 00:14:07.053 [2024-12-16 12:26:14.038955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.053 [2024-12-16 12:26:14.134378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.315 Running I/O for 5 seconds... 00:14:09.643 37263.00 IOPS, 145.56 MiB/s [2024-12-16T12:26:17.692Z] 37155.00 IOPS, 145.14 MiB/s [2024-12-16T12:26:18.637Z] 36567.33 IOPS, 142.84 MiB/s [2024-12-16T12:26:19.583Z] 35808.75 IOPS, 139.88 MiB/s 00:14:12.477 Latency(us) 00:14:12.477 [2024-12-16T12:26:19.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.477 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:12.477 xnvme_bdev : 5.00 35229.83 137.62 0.00 0.00 1812.76 310.35 7662.67 00:14:12.477 [2024-12-16T12:26:19.583Z] =================================================================================================================== 00:14:12.477 [2024-12-16T12:26:19.583Z] Total : 35229.83 137.62 0.00 0.00 1812.76 310.35 7662.67 00:14:13.093 12:26:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.093 12:26:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:13.093 12:26:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:13.093 12:26:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:13.093 12:26:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.359 { 00:14:13.359 "subsystems": [ 00:14:13.359 { 00:14:13.359 "subsystem": "bdev", 00:14:13.360 "config": [ 00:14:13.360 { 00:14:13.360 "params": { 00:14:13.360 "io_mechanism": "io_uring_cmd", 00:14:13.360 "conserve_cpu": false, 00:14:13.360 "filename": "/dev/ng0n1", 00:14:13.360 "name": "xnvme_bdev" 00:14:13.360 }, 00:14:13.360 "method": "bdev_xnvme_create" 00:14:13.360 }, 00:14:13.360 { 00:14:13.360 "method": "bdev_wait_for_examine" 00:14:13.360 } 00:14:13.360 ] 00:14:13.360 } 00:14:13.360 ] 00:14:13.360 } 00:14:13.360 [2024-12-16 12:26:20.267067] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
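For io_uring_cmd the bdevperf pass widens to four workloads: randread and randwrite as before, then unmap and write_zeroes further down. Condensed into one loop (a sketch; the flags are verbatim from this log, and the config file name is illustrative, assumed to carry the io_uring_cmd parameters printed above):

for w in randread randwrite unmap write_zeroes; do
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev_cmd.json -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
done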
00:14:13.360 [2024-12-16 12:26:20.267254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72680 ] 00:14:13.360 [2024-12-16 12:26:20.432083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.621 [2024-12-16 12:26:20.548702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.883 Running I/O for 5 seconds... 00:14:15.771 34450.00 IOPS, 134.57 MiB/s [2024-12-16T12:26:23.894Z] 34594.50 IOPS, 135.13 MiB/s [2024-12-16T12:26:25.281Z] 34320.67 IOPS, 134.07 MiB/s [2024-12-16T12:26:25.852Z] 35076.00 IOPS, 137.02 MiB/s 00:14:18.746 Latency(us) 00:14:18.746 [2024-12-16T12:26:25.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.746 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:18.746 xnvme_bdev : 5.00 34840.89 136.10 0.00 0.00 1832.73 371.79 7007.31 00:14:18.746 [2024-12-16T12:26:25.852Z] =================================================================================================================== 00:14:18.746 [2024-12-16T12:26:25.852Z] Total : 34840.89 136.10 0.00 0.00 1832.73 371.79 7007.31 00:14:19.689 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:19.689 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:19.689 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:19.689 12:26:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:19.689 12:26:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:19.689 { 00:14:19.689 "subsystems": [ 00:14:19.689 { 00:14:19.689 "subsystem": "bdev", 00:14:19.689 "config": [ 00:14:19.689 { 00:14:19.689 "params": { 00:14:19.689 "io_mechanism": "io_uring_cmd", 00:14:19.689 "conserve_cpu": false, 00:14:19.689 "filename": "/dev/ng0n1", 00:14:19.689 "name": "xnvme_bdev" 00:14:19.689 }, 00:14:19.689 "method": "bdev_xnvme_create" 00:14:19.689 }, 00:14:19.689 { 00:14:19.689 "method": "bdev_wait_for_examine" 00:14:19.689 } 00:14:19.689 ] 00:14:19.689 } 00:14:19.689 ] 00:14:19.689 } 00:14:19.689 [2024-12-16 12:26:26.689803] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:19.689 [2024-12-16 12:26:26.690206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72753 ] 00:14:19.950 [2024-12-16 12:26:26.854099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.950 [2024-12-16 12:26:26.973464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.212 Running I/O for 5 seconds... 
00:14:22.543 79552.00 IOPS, 310.75 MiB/s [2024-12-16T12:26:30.594Z] 79520.00 IOPS, 310.62 MiB/s [2024-12-16T12:26:31.534Z] 79402.67 IOPS, 310.17 MiB/s [2024-12-16T12:26:32.477Z] 79408.00 IOPS, 310.19 MiB/s [2024-12-16T12:26:32.477Z] 77452.80 IOPS, 302.55 MiB/s 00:14:25.371 Latency(us) 00:14:25.371 [2024-12-16T12:26:32.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.371 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:25.371 xnvme_bdev : 5.00 77427.71 302.45 0.00 0.00 823.19 500.97 3654.89 00:14:25.371 [2024-12-16T12:26:32.477Z] =================================================================================================================== 00:14:25.371 [2024-12-16T12:26:32.477Z] Total : 77427.71 302.45 0.00 0.00 823.19 500.97 3654.89 00:14:25.942 12:26:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:25.942 12:26:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:25.942 12:26:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:25.942 12:26:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:25.942 12:26:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:25.942 { 00:14:25.942 "subsystems": [ 00:14:25.942 { 00:14:25.942 "subsystem": "bdev", 00:14:25.942 "config": [ 00:14:25.942 { 00:14:25.942 "params": { 00:14:25.942 "io_mechanism": "io_uring_cmd", 00:14:25.942 "conserve_cpu": false, 00:14:25.942 "filename": "/dev/ng0n1", 00:14:25.942 "name": "xnvme_bdev" 00:14:25.942 }, 00:14:25.942 "method": "bdev_xnvme_create" 00:14:25.942 }, 00:14:25.942 { 00:14:25.942 "method": "bdev_wait_for_examine" 00:14:25.942 } 00:14:25.942 ] 00:14:25.943 } 00:14:25.943 ] 00:14:25.943 } 00:14:25.943 [2024-12-16 12:26:33.044348] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:25.943 [2024-12-16 12:26:33.044458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72829 ] 00:14:26.204 [2024-12-16 12:26:33.203692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.204 [2024-12-16 12:26:33.302089] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.463 Running I/O for 5 seconds... 
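unmap (NVMe deallocate) moves no data, so it clears roughly 77k IOPS where the 4 KiB data workloads sit near 35k; write_zeroes, which has just started above, lands between the two, as its totals below show. The ratio from the reported totals:

echo "scale=2; 77427.71 / 34840.89" | bc    # unmap runs ≈ 2.22x the randwrite rate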
00:14:28.782 47572.00 IOPS, 185.83 MiB/s [2024-12-16T12:26:36.826Z] 43197.00 IOPS, 168.74 MiB/s [2024-12-16T12:26:37.765Z] 42201.33 IOPS, 164.85 MiB/s [2024-12-16T12:26:38.706Z] 41394.75 IOPS, 161.70 MiB/s [2024-12-16T12:26:38.706Z] 40816.20 IOPS, 159.44 MiB/s 00:14:31.600 Latency(us) 00:14:31.600 [2024-12-16T12:26:38.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.600 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:31.600 xnvme_bdev : 5.00 40801.22 159.38 0.00 0.00 1564.45 172.50 21173.17 00:14:31.600 [2024-12-16T12:26:38.706Z] =================================================================================================================== 00:14:31.600 [2024-12-16T12:26:38.706Z] Total : 40801.22 159.38 0.00 0.00 1564.45 172.50 21173.17 00:14:32.544 00:14:32.544 real 0m25.540s 00:14:32.544 user 0m13.777s 00:14:32.544 sys 0m11.236s 00:14:32.544 12:26:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.544 ************************************ 00:14:32.544 END TEST xnvme_bdevperf 00:14:32.544 ************************************ 00:14:32.544 12:26:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:32.544 12:26:39 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:32.544 12:26:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.544 12:26:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.544 12:26:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.544 ************************************ 00:14:32.544 START TEST xnvme_fio_plugin 00:14:32.544 ************************************ 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- 
xnvme/xnvme.sh@32 -- # gen_conf 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:32.544 12:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:32.544 { 00:14:32.544 "subsystems": [ 00:14:32.544 { 00:14:32.544 "subsystem": "bdev", 00:14:32.544 "config": [ 00:14:32.544 { 00:14:32.544 "params": { 00:14:32.544 "io_mechanism": "io_uring_cmd", 00:14:32.544 "conserve_cpu": false, 00:14:32.544 "filename": "/dev/ng0n1", 00:14:32.544 "name": "xnvme_bdev" 00:14:32.544 }, 00:14:32.544 "method": "bdev_xnvme_create" 00:14:32.544 }, 00:14:32.544 { 00:14:32.544 "method": "bdev_wait_for_examine" 00:14:32.544 } 00:14:32.544 ] 00:14:32.544 } 00:14:32.544 ] 00:14:32.544 } 00:14:32.544 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:32.544 fio-3.35 00:14:32.544 Starting 1 thread 00:14:39.136 00:14:39.136 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72947: Mon Dec 16 12:26:45 2024 00:14:39.136 read: IOPS=33.8k, BW=132MiB/s (138MB/s)(660MiB/5001msec) 00:14:39.136 slat (nsec): min=2885, max=80028, avg=3633.72, stdev=1963.79 00:14:39.136 clat (usec): min=961, max=3248, avg=1747.85, stdev=261.39 00:14:39.136 lat (usec): min=964, max=3279, avg=1751.48, stdev=261.82 00:14:39.136 clat percentiles (usec): 00:14:39.136 | 1.00th=[ 1254], 5.00th=[ 1385], 10.00th=[ 1434], 20.00th=[ 1532], 00:14:39.136 | 30.00th=[ 1598], 40.00th=[ 1663], 50.00th=[ 1729], 60.00th=[ 1778], 00:14:39.136 | 70.00th=[ 1860], 80.00th=[ 1942], 90.00th=[ 2089], 95.00th=[ 2212], 00:14:39.136 | 99.00th=[ 2507], 99.50th=[ 2638], 99.90th=[ 2868], 99.95th=[ 2966], 00:14:39.136 | 99.99th=[ 3097] 00:14:39.136 bw ( KiB/s): min=132096, max=138752, per=100.00%, avg=135168.00, stdev=2521.31, samples=9 00:14:39.136 iops : min=33024, max=34688, avg=33792.00, stdev=630.33, samples=9 00:14:39.136 lat (usec) : 1000=0.01% 00:14:39.136 lat (msec) : 2=84.61%, 4=15.38% 00:14:39.136 cpu : usr=37.28%, sys=61.52%, ctx=11, majf=0, minf=762 00:14:39.136 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:39.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.136 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:14:39.136 issued rwts: total=168960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.136 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:39.136 00:14:39.136 Run status group 0 (all jobs): 00:14:39.136 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=660MiB (692MB), run=5001-5001msec 00:14:39.397 ----------------------------------------------------- 00:14:39.397 Suppressions used: 00:14:39.397 count bytes template 00:14:39.397 1 11 /usr/src/fio/parse.c 00:14:39.397 1 8 libtcmalloc_minimal.so 00:14:39.397 1 904 libcrypto.so 00:14:39.397 ----------------------------------------------------- 00:14:39.397 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:39.397 { 00:14:39.397 "subsystems": [ 00:14:39.397 { 00:14:39.397 "subsystem": "bdev", 00:14:39.397 "config": [ 00:14:39.397 { 00:14:39.397 "params": { 00:14:39.397 "io_mechanism": "io_uring_cmd", 00:14:39.397 "conserve_cpu": false, 00:14:39.397 "filename": "/dev/ng0n1", 00:14:39.397 "name": "xnvme_bdev" 00:14:39.397 }, 00:14:39.397 "method": "bdev_xnvme_create" 00:14:39.397 }, 00:14:39.397 { 00:14:39.397 "method": "bdev_wait_for_examine" 00:14:39.397 } 00:14:39.397 ] 00:14:39.397 } 00:14:39.397 ] 00:14:39.397 } 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:39.397 12:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.658 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:39.658 fio-3.35 00:14:39.658 Starting 1 thread 00:14:46.243 00:14:46.243 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73032: Mon Dec 16 12:26:52 2024 00:14:46.243 write: IOPS=36.2k, BW=141MiB/s (148MB/s)(708MiB/5002msec); 0 zone resets 00:14:46.243 slat (usec): min=2, max=313, avg= 3.60, stdev= 2.39 00:14:46.243 clat (usec): min=406, max=5074, avg=1622.71, stdev=288.95 00:14:46.243 lat (usec): min=410, max=5081, avg=1626.30, stdev=289.28 00:14:46.243 clat percentiles (usec): 00:14:46.243 | 1.00th=[ 1106], 5.00th=[ 1205], 10.00th=[ 1287], 20.00th=[ 1385], 00:14:46.243 | 30.00th=[ 1467], 40.00th=[ 1532], 50.00th=[ 1598], 60.00th=[ 1663], 00:14:46.243 | 70.00th=[ 1745], 80.00th=[ 1827], 90.00th=[ 1975], 95.00th=[ 2114], 00:14:46.243 | 99.00th=[ 2442], 99.50th=[ 2638], 99.90th=[ 3326], 99.95th=[ 3654], 00:14:46.243 | 99.99th=[ 4621] 00:14:46.243 bw ( KiB/s): min=135120, max=165376, per=98.73%, avg=143018.67, stdev=9402.82, samples=9 00:14:46.243 iops : min=33780, max=41344, avg=35754.67, stdev=2350.70, samples=9 00:14:46.243 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.19% 00:14:46.243 lat (msec) : 2=90.90%, 4=8.84%, 10=0.02% 00:14:46.244 cpu : usr=39.01%, sys=59.19%, ctx=77, majf=0, minf=763 00:14:46.244 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.3%, 16=24.7%, 32=50.8%, >=64=1.6% 00:14:46.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.244 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:46.244 issued rwts: total=0,181148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.244 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:46.244 00:14:46.244 Run status group 0 (all jobs): 00:14:46.244 WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=708MiB (742MB), run=5002-5002msec 00:14:46.244 ----------------------------------------------------- 00:14:46.244 Suppressions used: 00:14:46.244 count bytes template 00:14:46.244 1 11 /usr/src/fio/parse.c 00:14:46.244 1 8 libtcmalloc_minimal.so 00:14:46.244 1 904 libcrypto.so 00:14:46.244 ----------------------------------------------------- 00:14:46.244 00:14:46.244 00:14:46.244 real 0m13.791s 00:14:46.244 user 0m6.705s 00:14:46.244 sys 0m6.600s 00:14:46.244 12:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.244 ************************************ 00:14:46.244 END TEST xnvme_fio_plugin 00:14:46.244 12:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:46.244 ************************************ 00:14:46.244 12:26:53 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:46.244 12:26:53 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:46.244 12:26:53 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:46.244 12:26:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:46.244 12:26:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:46.244 12:26:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.244 12:26:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:46.244 ************************************ 00:14:46.244 START TEST xnvme_rpc 00:14:46.244 ************************************ 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73117 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73117 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73117 ']' 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.244 12:26:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.504 [2024-12-16 12:26:53.365575] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
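The xnvme_rpc test starting here exercises the same device over spdk_tgt's JSON-RPC socket instead of bdevperf. The trace below drives it through the harness's rpc_cmd wrapper and jq filters; done by hand with SPDK's scripts/rpc.py, the session would look roughly like this (a sketch; /dev/ng0n1 is the char device used throughout this run):

build/bin/spdk_tgt &        # same binary the test launches
spdk_tgt_pid=$!
sleep 1                     # crude: wait for /var/tmp/spdk.sock to appear
scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c   # -c sets conserve_cpu
scripts/rpc.py framework_get_config bdev |
  jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
scripts/rpc.py bdev_xnvme_delete xnvme_bdev
kill "$spdk_tgt_pid"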
00:14:46.505 [2024-12-16 12:26:53.365724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73117 ] 00:14:46.505 [2024-12-16 12:26:53.522245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.765 [2024-12-16 12:26:53.639659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.337 xnvme_bdev 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.337 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.598 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:47.598 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:47.598 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:47.598 12:26:54 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:47.598 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.598 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73117 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73117 ']' 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73117 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73117 00:14:47.599 killing process with pid 73117 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73117' 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73117 00:14:47.599 12:26:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73117 00:14:49.510 ************************************ 00:14:49.510 END TEST xnvme_rpc 00:14:49.510 ************************************ 00:14:49.510 00:14:49.510 real 0m2.892s 00:14:49.510 user 0m2.897s 00:14:49.510 sys 0m0.476s 00:14:49.510 12:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.510 12:26:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.510 12:26:56 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:49.510 12:26:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:49.510 12:26:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.510 12:26:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.510 ************************************ 00:14:49.510 START TEST xnvme_bdevperf 00:14:49.510 ************************************ 00:14:49.510 12:26:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:49.510 12:26:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:49.510 12:26:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:49.510 12:26:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:49.511 12:26:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:49.511 12:26:56 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:49.511 12:26:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:49.511 12:26:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.511 { 00:14:49.511 "subsystems": [ 00:14:49.511 { 00:14:49.511 "subsystem": "bdev", 00:14:49.511 "config": [ 00:14:49.511 { 00:14:49.511 "params": { 00:14:49.511 "io_mechanism": "io_uring_cmd", 00:14:49.511 "conserve_cpu": true, 00:14:49.511 "filename": "/dev/ng0n1", 00:14:49.511 "name": "xnvme_bdev" 00:14:49.511 }, 00:14:49.511 "method": "bdev_xnvme_create" 00:14:49.511 }, 00:14:49.511 { 00:14:49.511 "method": "bdev_wait_for_examine" 00:14:49.511 } 00:14:49.511 ] 00:14:49.511 } 00:14:49.511 ] 00:14:49.511 } 00:14:49.511 [2024-12-16 12:26:56.314642] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:49.511 [2024-12-16 12:26:56.314785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73186 ] 00:14:49.511 [2024-12-16 12:26:56.477187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.511 [2024-12-16 12:26:56.601707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.083 Running I/O for 5 seconds... 00:14:51.970 33664.00 IOPS, 131.50 MiB/s [2024-12-16T12:27:00.021Z] 33888.00 IOPS, 132.38 MiB/s [2024-12-16T12:27:00.963Z] 34005.33 IOPS, 132.83 MiB/s [2024-12-16T12:27:01.906Z] 33960.00 IOPS, 132.66 MiB/s [2024-12-16T12:27:01.906Z] 33875.20 IOPS, 132.32 MiB/s 00:14:54.800 Latency(us) 00:14:54.800 [2024-12-16T12:27:01.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.800 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:54.800 xnvme_bdev : 5.00 33864.34 132.28 0.00 0.00 1885.92 913.72 8721.33 00:14:54.800 [2024-12-16T12:27:01.906Z] =================================================================================================================== 00:14:54.800 [2024-12-16T12:27:01.906Z] Total : 33864.34 132.28 0.00 0.00 1885.92 913.72 8721.33 00:14:55.743 12:27:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:55.743 12:27:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:55.743 12:27:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:55.743 12:27:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:55.743 12:27:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:55.743 { 00:14:55.743 "subsystems": [ 00:14:55.743 { 00:14:55.743 "subsystem": "bdev", 00:14:55.743 "config": [ 00:14:55.743 { 00:14:55.743 "params": { 00:14:55.743 "io_mechanism": "io_uring_cmd", 00:14:55.743 "conserve_cpu": true, 00:14:55.743 "filename": "/dev/ng0n1", 00:14:55.743 "name": "xnvme_bdev" 00:14:55.743 }, 00:14:55.743 "method": "bdev_xnvme_create" 00:14:55.743 }, 00:14:55.743 { 00:14:55.743 "method": "bdev_wait_for_examine" 00:14:55.743 } 00:14:55.743 ] 00:14:55.743 } 00:14:55.743 ] 00:14:55.743 } 00:14:55.743 [2024-12-16 12:27:02.762760] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
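In this half of the suite the bdevperf matrix reruns with "conserve_cpu": true in the generated config, matching the -c flag passed to bdev_xnvme_create in the RPC test above. To replay a single run outside the harness, the streamed JSON can simply be written to a file; the contents below are copied from the config blocks in this log, so only the temp-file path is an invention:

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": true,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
build/examples/bdevperf --json /tmp/xnvme_bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096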
00:14:55.743 [2024-12-16 12:27:02.762903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73260 ] 00:14:56.003 [2024-12-16 12:27:02.927816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.003 [2024-12-16 12:27:03.044927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.265 Running I/O for 5 seconds... 00:14:58.590 34788.00 IOPS, 135.89 MiB/s [2024-12-16T12:27:06.638Z] 34754.50 IOPS, 135.76 MiB/s [2024-12-16T12:27:07.578Z] 34780.67 IOPS, 135.86 MiB/s [2024-12-16T12:27:08.521Z] 34796.50 IOPS, 135.92 MiB/s [2024-12-16T12:27:08.521Z] 34838.20 IOPS, 136.09 MiB/s 00:15:01.415 Latency(us) 00:15:01.415 [2024-12-16T12:27:08.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.415 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:01.415 xnvme_bdev : 5.01 34815.43 136.00 0.00 0.00 1833.86 882.22 6200.71 00:15:01.415 [2024-12-16T12:27:08.521Z] =================================================================================================================== 00:15:01.415 [2024-12-16T12:27:08.521Z] Total : 34815.43 136.00 0.00 0.00 1833.86 882.22 6200.71 00:15:02.357 12:27:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.357 12:27:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:02.357 12:27:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:02.357 12:27:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:02.357 12:27:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:02.357 { 00:15:02.357 "subsystems": [ 00:15:02.357 { 00:15:02.357 "subsystem": "bdev", 00:15:02.357 "config": [ 00:15:02.357 { 00:15:02.357 "params": { 00:15:02.357 "io_mechanism": "io_uring_cmd", 00:15:02.357 "conserve_cpu": true, 00:15:02.357 "filename": "/dev/ng0n1", 00:15:02.357 "name": "xnvme_bdev" 00:15:02.357 }, 00:15:02.357 "method": "bdev_xnvme_create" 00:15:02.357 }, 00:15:02.357 { 00:15:02.357 "method": "bdev_wait_for_examine" 00:15:02.357 } 00:15:02.357 ] 00:15:02.357 } 00:15:02.357 ] 00:15:02.357 } 00:15:02.357 [2024-12-16 12:27:09.205227] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:02.357 [2024-12-16 12:27:09.205588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73340 ] 00:15:02.357 [2024-12-16 12:27:09.371040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.618 [2024-12-16 12:27:09.488929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.879 Running I/O for 5 seconds... 
00:15:04.761 79680.00 IOPS, 311.25 MiB/s [2024-12-16T12:27:12.868Z] 79840.00 IOPS, 311.88 MiB/s [2024-12-16T12:27:13.867Z] 79701.33 IOPS, 311.33 MiB/s [2024-12-16T12:27:14.802Z] 79504.00 IOPS, 310.56 MiB/s [2024-12-16T12:27:14.802Z] 81984.00 IOPS, 320.25 MiB/s 00:15:07.696 Latency(us) 00:15:07.696 [2024-12-16T12:27:14.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.696 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:07.696 xnvme_bdev : 5.00 81951.30 320.12 0.00 0.00 777.51 335.56 2671.85 00:15:07.696 [2024-12-16T12:27:14.802Z] =================================================================================================================== 00:15:07.696 [2024-12-16T12:27:14.802Z] Total : 81951.30 320.12 0.00 0.00 777.51 335.56 2671.85 00:15:08.265 12:27:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.265 12:27:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:08.265 12:27:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:08.265 12:27:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:08.265 12:27:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:08.265 { 00:15:08.265 "subsystems": [ 00:15:08.265 { 00:15:08.265 "subsystem": "bdev", 00:15:08.266 "config": [ 00:15:08.266 { 00:15:08.266 "params": { 00:15:08.266 "io_mechanism": "io_uring_cmd", 00:15:08.266 "conserve_cpu": true, 00:15:08.266 "filename": "/dev/ng0n1", 00:15:08.266 "name": "xnvme_bdev" 00:15:08.266 }, 00:15:08.266 "method": "bdev_xnvme_create" 00:15:08.266 }, 00:15:08.266 { 00:15:08.266 "method": "bdev_wait_for_examine" 00:15:08.266 } 00:15:08.266 ] 00:15:08.266 } 00:15:08.266 ] 00:15:08.266 } 00:15:08.525 [2024-12-16 12:27:15.390684] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:08.525 [2024-12-16 12:27:15.390792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73414 ] 00:15:08.525 [2024-12-16 12:27:15.547057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.786 [2024-12-16 12:27:15.631665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.786 Running I/O for 5 seconds... 
00:15:10.729 54229.00 IOPS, 211.83 MiB/s [2024-12-16T12:27:19.217Z] 55641.00 IOPS, 217.35 MiB/s [2024-12-16T12:27:20.159Z] 54120.67 IOPS, 211.41 MiB/s [2024-12-16T12:27:21.100Z] 50735.25 IOPS, 198.18 MiB/s [2024-12-16T12:27:21.100Z] 47076.60 IOPS, 183.89 MiB/s 00:15:13.994 Latency(us) 00:15:13.994 [2024-12-16T12:27:21.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.994 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:13.994 xnvme_bdev : 5.00 47051.83 183.80 0.00 0.00 1355.06 60.26 21778.12 00:15:13.994 [2024-12-16T12:27:21.100Z] =================================================================================================================== 00:15:13.994 [2024-12-16T12:27:21.100Z] Total : 47051.83 183.80 0.00 0.00 1355.06 60.26 21778.12 00:15:14.566 00:15:14.566 real 0m25.385s 00:15:14.566 user 0m17.261s 00:15:14.566 sys 0m6.068s 00:15:14.566 12:27:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.566 12:27:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:14.566 ************************************ 00:15:14.566 END TEST xnvme_bdevperf 00:15:14.566 ************************************ 00:15:14.827 12:27:21 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:14.827 12:27:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:14.827 12:27:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.827 12:27:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.827 ************************************ 00:15:14.827 START TEST xnvme_fio_plugin 00:15:14.827 ************************************ 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:14.827 12:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.827 { 00:15:14.827 "subsystems": [ 00:15:14.827 { 00:15:14.827 "subsystem": "bdev", 00:15:14.827 "config": [ 00:15:14.827 { 00:15:14.827 "params": { 00:15:14.827 "io_mechanism": "io_uring_cmd", 00:15:14.827 "conserve_cpu": true, 00:15:14.827 "filename": "/dev/ng0n1", 00:15:14.827 "name": "xnvme_bdev" 00:15:14.827 }, 00:15:14.827 "method": "bdev_xnvme_create" 00:15:14.827 }, 00:15:14.827 { 00:15:14.827 "method": "bdev_wait_for_examine" 00:15:14.827 } 00:15:14.827 ] 00:15:14.827 } 00:15:14.827 ] 00:15:14.827 } 00:15:14.827 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:14.827 fio-3.35 00:15:14.827 Starting 1 thread 00:15:21.420 00:15:21.420 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73527: Mon Dec 16 12:27:27 2024 00:15:21.420 read: IOPS=40.3k, BW=158MiB/s (165MB/s)(788MiB/5001msec) 00:15:21.420 slat (nsec): min=2885, max=73286, avg=3376.36, stdev=1646.34 00:15:21.420 clat (usec): min=847, max=6862, avg=1453.03, stdev=328.92 00:15:21.420 lat (usec): min=850, max=6865, avg=1456.40, stdev=329.36 00:15:21.420 clat percentiles (usec): 00:15:21.420 | 1.00th=[ 1004], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1172], 00:15:21.420 | 30.00th=[ 1221], 40.00th=[ 1287], 50.00th=[ 1369], 60.00th=[ 1467], 00:15:21.420 | 70.00th=[ 1582], 80.00th=[ 1713], 90.00th=[ 1909], 95.00th=[ 2057], 00:15:21.420 | 99.00th=[ 2409], 99.50th=[ 2573], 99.90th=[ 2966], 99.95th=[ 3228], 00:15:21.420 | 99.99th=[ 5407] 00:15:21.420 bw ( KiB/s): min=135168, max=188928, per=99.46%, avg=160423.11, stdev=24073.65, samples=9 00:15:21.420 iops : min=33792, max=47232, avg=40105.78, stdev=6018.41, samples=9 00:15:21.420 lat (usec) : 1000=1.00% 00:15:21.420 lat (msec) : 2=92.45%, 4=6.52%, 10=0.03% 00:15:21.420 cpu : usr=73.74%, sys=23.66%, ctx=9, majf=0, minf=762 00:15:21.420 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:21.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.420 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:15:21.420 issued rwts: total=201660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:21.420 00:15:21.420 Run status group 0 (all jobs): 00:15:21.420 READ: bw=158MiB/s (165MB/s), 158MiB/s-158MiB/s (165MB/s-165MB/s), io=788MiB (826MB), run=5001-5001msec 00:15:21.682 ----------------------------------------------------- 00:15:21.682 Suppressions used: 00:15:21.682 count bytes template 00:15:21.682 1 11 /usr/src/fio/parse.c 00:15:21.682 1 8 libtcmalloc_minimal.so 00:15:21.682 1 904 libcrypto.so 00:15:21.682 ----------------------------------------------------- 00:15:21.682 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:21.682 12:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.682 { 00:15:21.682 "subsystems": [ 00:15:21.682 { 00:15:21.682 "subsystem": "bdev", 00:15:21.682 "config": [ 00:15:21.682 { 00:15:21.682 "params": { 00:15:21.682 "io_mechanism": "io_uring_cmd", 00:15:21.682 "conserve_cpu": true, 00:15:21.682 "filename": "/dev/ng0n1", 00:15:21.682 "name": "xnvme_bdev" 00:15:21.682 }, 00:15:21.682 "method": "bdev_xnvme_create" 00:15:21.682 }, 00:15:21.682 { 00:15:21.682 "method": "bdev_wait_for_examine" 00:15:21.682 } 00:15:21.682 ] 00:15:21.682 } 00:15:21.682 ] 00:15:21.682 } 00:15:21.682 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:21.682 fio-3.35 00:15:21.682 Starting 1 thread 00:15:28.271 00:15:28.271 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73612: Mon Dec 16 12:27:34 2024 00:15:28.271 write: IOPS=38.1k, BW=149MiB/s (156MB/s)(746MiB/5004msec); 0 zone resets 00:15:28.271 slat (usec): min=2, max=136, avg= 4.01, stdev= 2.28 00:15:28.271 clat (usec): min=85, max=20916, avg=1518.86, stdev=746.27 00:15:28.271 lat (usec): min=88, max=20919, avg=1522.88, stdev=746.48 00:15:28.271 clat percentiles (usec): 00:15:28.271 | 1.00th=[ 979], 5.00th=[ 1090], 10.00th=[ 1156], 20.00th=[ 1237], 00:15:28.271 | 30.00th=[ 1319], 40.00th=[ 1385], 50.00th=[ 1450], 60.00th=[ 1516], 00:15:28.271 | 70.00th=[ 1598], 80.00th=[ 1696], 90.00th=[ 1844], 95.00th=[ 1975], 00:15:28.271 | 99.00th=[ 2409], 99.50th=[ 3556], 99.90th=[14746], 99.95th=[16712], 00:15:28.271 | 99.99th=[19268] 00:15:28.271 bw ( KiB/s): min=125736, max=170664, per=99.24%, avg=151400.89, stdev=13415.74, samples=9 00:15:28.271 iops : min=31434, max=42666, avg=37850.22, stdev=3353.93, samples=9 00:15:28.271 lat (usec) : 100=0.01%, 250=0.03%, 500=0.12%, 750=0.28%, 1000=0.81% 00:15:28.271 lat (msec) : 2=94.30%, 4=3.99%, 10=0.23%, 20=0.23%, 50=0.01% 00:15:28.271 cpu : usr=56.11%, sys=38.96%, ctx=32, majf=0, minf=763 00:15:28.271 IO depths : 1=1.4%, 2=3.0%, 4=6.0%, 8=12.2%, 16=24.7%, 32=50.8%, >=64=1.8% 00:15:28.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.271 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:28.271 issued rwts: total=0,190861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.271 00:15:28.271 Run status group 0 (all jobs): 00:15:28.271 WRITE: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=746MiB (782MB), run=5004-5004msec 00:15:28.532 ----------------------------------------------------- 00:15:28.532 Suppressions used: 00:15:28.532 count bytes template 00:15:28.533 1 11 /usr/src/fio/parse.c 00:15:28.533 1 8 libtcmalloc_minimal.so 00:15:28.533 1 904 libcrypto.so 00:15:28.533 ----------------------------------------------------- 00:15:28.533 00:15:28.533 00:15:28.533 real 0m13.779s 00:15:28.533 user 0m9.317s 00:15:28.533 sys 0m3.758s 00:15:28.533 12:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.533 ************************************ 00:15:28.533 END TEST xnvme_fio_plugin 00:15:28.533 ************************************ 00:15:28.533 12:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:28.533 Process with pid 73117 is not found 00:15:28.533 12:27:35 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73117 00:15:28.533 12:27:35 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73117 ']' 00:15:28.533 
12:27:35 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73117 00:15:28.533 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73117) - No such process 00:15:28.533 12:27:35 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73117 is not found' 00:15:28.533 ************************************ 00:15:28.533 END TEST nvme_xnvme 00:15:28.533 ************************************ 00:15:28.533 00:15:28.533 real 3m31.053s 00:15:28.533 user 1m58.608s 00:15:28.533 sys 1m17.999s 00:15:28.533 12:27:35 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.533 12:27:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.533 12:27:35 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:28.533 12:27:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.533 12:27:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.533 12:27:35 -- common/autotest_common.sh@10 -- # set +x 00:15:28.533 ************************************ 00:15:28.533 START TEST blockdev_xnvme 00:15:28.533 ************************************ 00:15:28.533 12:27:35 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:28.795 * Looking for test storage... 00:15:28.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.795 12:27:35 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:28.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.795 --rc genhtml_branch_coverage=1 00:15:28.795 --rc genhtml_function_coverage=1 00:15:28.795 --rc genhtml_legend=1 00:15:28.795 --rc geninfo_all_blocks=1 00:15:28.795 --rc geninfo_unexecuted_blocks=1 00:15:28.795 00:15:28.795 ' 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:28.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.795 --rc genhtml_branch_coverage=1 00:15:28.795 --rc genhtml_function_coverage=1 00:15:28.795 --rc genhtml_legend=1 00:15:28.795 --rc geninfo_all_blocks=1 00:15:28.795 --rc geninfo_unexecuted_blocks=1 00:15:28.795 00:15:28.795 ' 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:28.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.795 --rc genhtml_branch_coverage=1 00:15:28.795 --rc genhtml_function_coverage=1 00:15:28.795 --rc genhtml_legend=1 00:15:28.795 --rc geninfo_all_blocks=1 00:15:28.795 --rc geninfo_unexecuted_blocks=1 00:15:28.795 00:15:28.795 ' 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:28.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.795 --rc genhtml_branch_coverage=1 00:15:28.795 --rc genhtml_function_coverage=1 00:15:28.795 --rc genhtml_legend=1 00:15:28.795 --rc geninfo_all_blocks=1 00:15:28.795 --rc geninfo_unexecuted_blocks=1 00:15:28.795 00:15:28.795 ' 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73752 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73752 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73752 ']' 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.795 12:27:35 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.795 12:27:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.795 [2024-12-16 12:27:35.826990] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:15:28.796 [2024-12-16 12:27:35.827421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73752 ] 00:15:29.056 [2024-12-16 12:27:35.989140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.056 [2024-12-16 12:27:36.106391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.000 12:27:36 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.000 12:27:36 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:30.000 12:27:36 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:30.000 12:27:36 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:30.000 12:27:36 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:30.000 12:27:36 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:30.000 12:27:36 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:30.262 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:30.835 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:30.835 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:30.835 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:30.835 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:30.835 12:27:37 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.835 12:27:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:30.835 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:30.835 nvme0n1 00:15:30.835 nvme0n2 00:15:30.835 nvme0n3 00:15:30.835 nvme1n1 00:15:30.835 nvme2n1 00:15:31.098 nvme3n1 00:15:31.098 12:27:37 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.098 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:31.098 12:27:37 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.098 12:27:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 12:27:37 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.098 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:31.098 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:31.098 12:27:37 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.098 12:27:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 12:27:37 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.098 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:31.098 12:27:37 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.098 12:27:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 12:27:37 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.098 12:27:38 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 
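The setup_xnvme_conf loop above walks /dev/nvme*n*, skips any namespace whose /sys/block/<dev>/queue/zoned reports something other than "none", and queues one bdev_xnvme_create per device (device path, bdev name, the io_uring I/O mechanism, plus the pass-through -c flag), before piping the whole batch to the target via rpc_cmd. Issued one call at a time against the same socket instead, the equivalent would be roughly:

  scripts/rpc.py -s /var/tmp/spdk.sock bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
  # ... likewise for nvme0n2, nvme0n3, nvme1n1, nvme2n1, nvme3n1 ...
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_wait_for_examine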
12:27:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.098 12:27:38 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:31.098 12:27:38 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.098 12:27:38 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.098 12:27:38 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:31.098 12:27:38 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3990db16-384d-4928-bd90-14ab51e44981"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3990db16-384d-4928-bd90-14ab51e44981",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "9c354346-d5dd-4016-b3c8-9b7a9d00158d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c354346-d5dd-4016-b3c8-9b7a9d00158d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "61ec7094-232e-4314-9434-0f410ae34723"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "61ec7094-232e-4314-9434-0f410ae34723",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"07c5ac8c-0898-4e4e-b72e-73bf309e697c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "07c5ac8c-0898-4e4e-b72e-73bf309e697c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "414df24a-e526-4edc-9341-9f651a107c85"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "414df24a-e526-4edc-9341-9f651a107c85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "717e42a5-e7a7-4e26-a44e-539a6c040ed7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "717e42a5-e7a7-4e26-a44e-539a6c040ed7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:31.098 12:27:38 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:31.098 12:27:38 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:31.098 12:27:38 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:31.098 12:27:38 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:31.098 12:27:38 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73752 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73752 ']' 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73752 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 73752 00:15:31.098 killing process with pid 73752 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73752' 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73752 00:15:31.098 12:27:38 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73752 00:15:33.012 12:27:39 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:33.012 12:27:39 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:33.012 12:27:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:33.012 12:27:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.012 12:27:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:33.012 ************************************ 00:15:33.012 START TEST bdev_hello_world 00:15:33.012 ************************************ 00:15:33.012 12:27:39 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:33.012 [2024-12-16 12:27:39.809200] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:33.012 [2024-12-16 12:27:39.809369] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74036 ] 00:15:33.012 [2024-12-16 12:27:39.975241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.012 [2024-12-16 12:27:40.115847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.583 [2024-12-16 12:27:40.501179] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:33.583 [2024-12-16 12:27:40.501237] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:33.583 [2024-12-16 12:27:40.501255] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:33.583 [2024-12-16 12:27:40.503466] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:33.583 [2024-12-16 12:27:40.504131] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:33.583 [2024-12-16 12:27:40.504197] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:33.583 [2024-12-16 12:27:40.505141] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
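The hello_world run above exercises SPDK's hello_bdev example end to end against the first xnvme bdev: open nvme0n1, write a buffer, read it back, and compare (the app shuts itself down once the read completes, as the "Stopping app" notice below shows). Its invocation, as the harness issues it, can be reproduced standalone:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1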
00:15:33.583 00:15:33.583 [2024-12-16 12:27:40.505207] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:34.565 ************************************ 00:15:34.565 END TEST bdev_hello_world 00:15:34.565 ************************************ 00:15:34.565 00:15:34.565 real 0m1.545s 00:15:34.565 user 0m1.178s 00:15:34.565 sys 0m0.216s 00:15:34.565 12:27:41 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.565 12:27:41 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:34.565 12:27:41 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:34.565 12:27:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:34.565 12:27:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.565 12:27:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.565 ************************************ 00:15:34.565 START TEST bdev_bounds 00:15:34.565 ************************************ 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74068 00:15:34.565 Process bdevio pid: 74068 00:15:34.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74068' 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74068 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74068 ']' 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:34.565 12:27:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:34.565 [2024-12-16 12:27:41.444111] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
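bdev_bounds drives the same bdev set through bdevio, started here in wait mode (-w) with no pre-reserved memory (-s 0, matching PRE_RESERVED_MEM=0 above); the CUnit suites that follow are then kicked off over its RPC socket by tests.py. Done by hand, that two-step is:

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests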
00:15:34.565 [2024-12-16 12:27:41.444306] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74068 ] 00:15:34.565 [2024-12-16 12:27:41.611121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:34.852 [2024-12-16 12:27:41.739306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.852 [2024-12-16 12:27:41.739626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.852 [2024-12-16 12:27:41.739649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.424 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.424 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:35.424 12:27:42 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:35.424 I/O targets: 00:15:35.424 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:35.424 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:35.424 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:35.424 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:35.424 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:35.424 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:35.424 00:15:35.424 00:15:35.424 CUnit - A unit testing framework for C - Version 2.1-3 00:15:35.424 http://cunit.sourceforge.net/ 00:15:35.424 00:15:35.424 00:15:35.424 Suite: bdevio tests on: nvme3n1 00:15:35.424 Test: blockdev write read block ...passed 00:15:35.424 Test: blockdev write zeroes read block ...passed 00:15:35.424 Test: blockdev write zeroes read no split ...passed 00:15:35.424 Test: blockdev write zeroes read split ...passed 00:15:35.424 Test: blockdev write zeroes read split partial ...passed 00:15:35.424 Test: blockdev reset ...passed 00:15:35.424 Test: blockdev write read 8 blocks ...passed 00:15:35.424 Test: blockdev write read size > 128k ...passed 00:15:35.424 Test: blockdev write read invalid size ...passed 00:15:35.424 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:35.424 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:35.424 Test: blockdev write read max offset ...passed 00:15:35.424 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:35.424 Test: blockdev writev readv 8 blocks ...passed 00:15:35.424 Test: blockdev writev readv 30 x 1block ...passed 00:15:35.424 Test: blockdev writev readv block ...passed 00:15:35.425 Test: blockdev writev readv size > 128k ...passed 00:15:35.425 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:35.425 Test: blockdev comparev and writev ...passed 00:15:35.425 Test: blockdev nvme passthru rw ...passed 00:15:35.425 Test: blockdev nvme passthru vendor specific ...passed 00:15:35.425 Test: blockdev nvme admin passthru ...passed 00:15:35.425 Test: blockdev copy ...passed 00:15:35.425 Suite: bdevio tests on: nvme2n1 00:15:35.425 Test: blockdev write read block ...passed 00:15:35.425 Test: blockdev write zeroes read block ...passed 00:15:35.425 Test: blockdev write zeroes read no split ...passed 00:15:35.425 Test: blockdev write zeroes read split ...passed 00:15:35.425 Test: blockdev write zeroes read split partial ...passed 00:15:35.425 Test: blockdev reset ...passed 
00:15:35.425 Test: blockdev write read 8 blocks ...passed 00:15:35.687 Test: blockdev write read size > 128k ...passed 00:15:35.687 Test: blockdev write read invalid size ...passed 00:15:35.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:35.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:35.687 Test: blockdev write read max offset ...passed 00:15:35.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:35.687 Test: blockdev writev readv 8 blocks ...passed 00:15:35.687 Test: blockdev writev readv 30 x 1block ...passed 00:15:35.687 Test: blockdev writev readv block ...passed 00:15:35.687 Test: blockdev writev readv size > 128k ...passed 00:15:35.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:35.687 Test: blockdev comparev and writev ...passed 00:15:35.687 Test: blockdev nvme passthru rw ...passed 00:15:35.687 Test: blockdev nvme passthru vendor specific ...passed 00:15:35.687 Test: blockdev nvme admin passthru ...passed 00:15:35.687 Test: blockdev copy ...passed 00:15:35.687 Suite: bdevio tests on: nvme1n1 00:15:35.687 Test: blockdev write read block ...passed 00:15:35.687 Test: blockdev write zeroes read block ...passed 00:15:35.687 Test: blockdev write zeroes read no split ...passed 00:15:35.687 Test: blockdev write zeroes read split ...passed 00:15:35.687 Test: blockdev write zeroes read split partial ...passed 00:15:35.687 Test: blockdev reset ...passed 00:15:35.687 Test: blockdev write read 8 blocks ...passed 00:15:35.687 Test: blockdev write read size > 128k ...passed 00:15:35.687 Test: blockdev write read invalid size ...passed 00:15:35.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:35.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:35.687 Test: blockdev write read max offset ...passed 00:15:35.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:35.687 Test: blockdev writev readv 8 blocks ...passed 00:15:35.687 Test: blockdev writev readv 30 x 1block ...passed 00:15:35.687 Test: blockdev writev readv block ...passed 00:15:35.687 Test: blockdev writev readv size > 128k ...passed 00:15:35.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:35.687 Test: blockdev comparev and writev ...passed 00:15:35.687 Test: blockdev nvme passthru rw ...passed 00:15:35.687 Test: blockdev nvme passthru vendor specific ...passed 00:15:35.687 Test: blockdev nvme admin passthru ...passed 00:15:35.687 Test: blockdev copy ...passed 00:15:35.687 Suite: bdevio tests on: nvme0n3 00:15:35.687 Test: blockdev write read block ...passed 00:15:35.687 Test: blockdev write zeroes read block ...passed 00:15:35.687 Test: blockdev write zeroes read no split ...passed 00:15:35.687 Test: blockdev write zeroes read split ...passed 00:15:35.687 Test: blockdev write zeroes read split partial ...passed 00:15:35.687 Test: blockdev reset ...passed 00:15:35.687 Test: blockdev write read 8 blocks ...passed 00:15:35.687 Test: blockdev write read size > 128k ...passed 00:15:35.687 Test: blockdev write read invalid size ...passed 00:15:35.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:35.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:35.687 Test: blockdev write read max offset ...passed 00:15:35.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:35.687 Test: blockdev writev readv 8 blocks 
...passed 00:15:35.687 Test: blockdev writev readv 30 x 1block ...passed 00:15:35.687 Test: blockdev writev readv block ...passed 00:15:35.687 Test: blockdev writev readv size > 128k ...passed 00:15:35.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:35.687 Test: blockdev comparev and writev ...passed 00:15:35.687 Test: blockdev nvme passthru rw ...passed 00:15:35.687 Test: blockdev nvme passthru vendor specific ...passed 00:15:35.687 Test: blockdev nvme admin passthru ...passed 00:15:35.687 Test: blockdev copy ...passed 00:15:35.687 Suite: bdevio tests on: nvme0n2 00:15:35.687 Test: blockdev write read block ...passed 00:15:35.687 Test: blockdev write zeroes read block ...passed 00:15:35.687 Test: blockdev write zeroes read no split ...passed 00:15:35.687 Test: blockdev write zeroes read split ...passed 00:15:35.687 Test: blockdev write zeroes read split partial ...passed 00:15:35.687 Test: blockdev reset ...passed 00:15:35.687 Test: blockdev write read 8 blocks ...passed 00:15:35.687 Test: blockdev write read size > 128k ...passed 00:15:35.687 Test: blockdev write read invalid size ...passed 00:15:35.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:35.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:35.687 Test: blockdev write read max offset ...passed 00:15:35.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:35.687 Test: blockdev writev readv 8 blocks ...passed 00:15:35.687 Test: blockdev writev readv 30 x 1block ...passed 00:15:35.687 Test: blockdev writev readv block ...passed 00:15:35.687 Test: blockdev writev readv size > 128k ...passed 00:15:35.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:35.949 Test: blockdev comparev and writev ...passed 00:15:35.949 Test: blockdev nvme passthru rw ...passed 00:15:35.949 Test: blockdev nvme passthru vendor specific ...passed 00:15:35.949 Test: blockdev nvme admin passthru ...passed 00:15:35.949 Test: blockdev copy ...passed 00:15:35.949 Suite: bdevio tests on: nvme0n1 00:15:35.949 Test: blockdev write read block ...passed 00:15:35.949 Test: blockdev write zeroes read block ...passed 00:15:35.949 Test: blockdev write zeroes read no split ...passed 00:15:35.949 Test: blockdev write zeroes read split ...passed 00:15:35.949 Test: blockdev write zeroes read split partial ...passed 00:15:35.950 Test: blockdev reset ...passed 00:15:35.950 Test: blockdev write read 8 blocks ...passed 00:15:35.950 Test: blockdev write read size > 128k ...passed 00:15:35.950 Test: blockdev write read invalid size ...passed 00:15:35.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:35.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:35.950 Test: blockdev write read max offset ...passed 00:15:35.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:35.950 Test: blockdev writev readv 8 blocks ...passed 00:15:35.950 Test: blockdev writev readv 30 x 1block ...passed 00:15:35.950 Test: blockdev writev readv block ...passed 00:15:35.950 Test: blockdev writev readv size > 128k ...passed 00:15:35.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:35.950 Test: blockdev comparev and writev ...passed 00:15:35.950 Test: blockdev nvme passthru rw ...passed 00:15:35.950 Test: blockdev nvme passthru vendor specific ...passed 00:15:35.950 Test: blockdev nvme admin passthru ...passed 00:15:35.950 Test: blockdev copy ...passed 
00:15:35.950 00:15:35.950 Run Summary: Type Total Ran Passed Failed Inactive 00:15:35.950 suites 6 6 n/a 0 0 00:15:35.950 tests 138 138 138 0 0 00:15:35.950 asserts 780 780 780 0 n/a 00:15:35.950 00:15:35.950 Elapsed time = 1.279 seconds 00:15:35.950 0 00:15:35.950 12:27:42 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74068 00:15:35.950 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74068 ']' 00:15:35.950 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74068 00:15:35.950 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:35.950 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.950 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74068 00:15:35.950 killing process with pid 74068 00:15:35.950 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.950 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.950 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74068' 00:15:35.950 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74068 00:15:35.950 12:27:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74068 00:15:36.893 12:27:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:36.893 00:15:36.893 real 0m2.337s 00:15:36.893 user 0m5.670s 00:15:36.893 sys 0m0.348s 00:15:36.893 12:27:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.893 ************************************ 00:15:36.893 END TEST bdev_bounds 00:15:36.893 ************************************ 00:15:36.893 12:27:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:36.893 12:27:43 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:36.893 12:27:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:36.893 12:27:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.893 12:27:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.893 ************************************ 00:15:36.893 START TEST bdev_nbd 00:15:36.893 ************************************ 00:15:36.893 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:36.893 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:36.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74128 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74128 /var/tmp/spdk-nbd.sock 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74128 ']' 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:36.894 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:36.894 [2024-12-16 12:27:43.806634] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
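bdev_nbd first confirms the nbd kernel module is present (/sys/module/nbd), then brings up a dedicated bdev_svc app on its own socket, /var/tmp/spdk-nbd.sock. The per-device loop that follows exports each xnvme bdev as a kernel NBD device and sanity-reads it with direct I/O before tearing it down; condensed, one iteration amounts to:

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0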
00:15:36.894 [2024-12-16 12:27:43.806746] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.894 [2024-12-16 12:27:43.967716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.155 [2024-12-16 12:27:44.069780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.727 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:37.728 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.990 
1+0 records in 00:15:37.990 1+0 records out 00:15:37.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134051 s, 3.1 MB/s 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:37.990 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.990 1+0 records in 00:15:37.990 1+0 records out 00:15:37.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111753 s, 3.7 MB/s 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:37.990 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:38.250 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:38.250 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:38.250 12:27:45 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:38.250 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:38.250 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:38.250 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.250 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.250 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:38.250 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:38.250 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.250 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.250 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.250 1+0 records in 00:15:38.250 1+0 records out 00:15:38.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000854045 s, 4.8 MB/s 00:15:38.251 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.251 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:38.251 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.251 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.251 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:38.251 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:38.251 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:38.251 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.511 1+0 records in 00:15:38.511 1+0 records out 00:15:38.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000895328 s, 4.6 MB/s 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:38.511 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.772 1+0 records in 00:15:38.772 1+0 records out 00:15:38.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00430685 s, 951 kB/s 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:38.772 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:39.034 12:27:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.034 1+0 records in 00:15:39.034 1+0 records out 00:15:39.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00149919 s, 2.7 MB/s 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:39.034 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:39.296 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd0", 00:15:39.296 "bdev_name": "nvme0n1" 00:15:39.296 }, 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd1", 00:15:39.296 "bdev_name": "nvme0n2" 00:15:39.296 }, 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd2", 00:15:39.296 "bdev_name": "nvme0n3" 00:15:39.296 }, 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd3", 00:15:39.296 "bdev_name": "nvme1n1" 00:15:39.296 }, 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd4", 00:15:39.296 "bdev_name": "nvme2n1" 00:15:39.296 }, 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd5", 00:15:39.296 "bdev_name": "nvme3n1" 00:15:39.296 } 00:15:39.296 ]' 00:15:39.296 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:39.296 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd0", 00:15:39.296 "bdev_name": "nvme0n1" 00:15:39.296 }, 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd1", 00:15:39.296 "bdev_name": "nvme0n2" 00:15:39.296 }, 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd2", 00:15:39.296 "bdev_name": "nvme0n3" 00:15:39.296 }, 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd3", 00:15:39.296 "bdev_name": "nvme1n1" 00:15:39.296 }, 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd4", 00:15:39.296 "bdev_name": "nvme2n1" 00:15:39.296 }, 00:15:39.296 { 00:15:39.296 "nbd_device": "/dev/nbd5", 00:15:39.296 "bdev_name": "nvme3n1" 00:15:39.296 } 00:15:39.296 ]' 00:15:39.296 12:27:46 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:39.296 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:39.296 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.296 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:39.296 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:39.296 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:39.296 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.296 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:39.558 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:39.558 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:39.558 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:39.558 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.558 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.558 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:39.558 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.558 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.558 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.558 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:39.819 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:39.819 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:39.819 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:39.819 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.819 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.819 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:39.819 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.819 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.819 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.819 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:40.080 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:40.080 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:40.080 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:40.080 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.080 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.080 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:40.080 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.080 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.080 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.080 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:40.342 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:40.342 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:40.342 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:40.342 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.342 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.342 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:40.342 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.342 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.342 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.342 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.605 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
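The stop loop traced above repeats one idiom per device: ask the SPDK app to stop the NBD disk over its RPC socket, then poll /proc/partitions until the kernel has actually torn the device down. A minimal sketch of that helper, reconstructed from the trace; the 0.1 s backoff does not appear in the xtrace output and is an assumption:

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # The device vanishes from /proc/partitions once teardown completes.
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1    # assumed backoff; the trace only shows the bounds check
        done
        echo "$nbd_name still present after 20 tries" >&2
        return 1
    }

    # Usage, mirroring the traced loop:
    #   rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 && waitfornbd_exit nbd0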
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:40.866 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:40.867 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:41.128 /dev/nbd0 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.129 1+0 records in 00:15:41.129 1+0 records out 00:15:41.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104972 s, 3.9 MB/s 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:41.129 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:41.390 /dev/nbd1 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.390 1+0 records in 00:15:41.390 1+0 records out 00:15:41.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010113 s, 4.1 MB/s 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:41.390 12:27:48 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:41.390 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:41.652 /dev/nbd10 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.652 1+0 records in 00:15:41.652 1+0 records out 00:15:41.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110789 s, 3.7 MB/s 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:41.652 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:41.914 /dev/nbd11 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.914 12:27:48 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.914 1+0 records in 00:15:41.914 1+0 records out 00:15:41.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125434 s, 3.3 MB/s 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:41.914 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:42.179 /dev/nbd12 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.179 1+0 records in 00:15:42.179 1+0 records out 00:15:42.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137566 s, 3.0 MB/s 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:42.179 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:42.442 /dev/nbd13 00:15:42.442 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:42.442 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:42.442 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:42.442 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:42.442 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.442 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.442 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:42.442 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:42.442 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.442 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.443 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.443 1+0 records in 00:15:42.443 1+0 records out 00:15:42.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114637 s, 3.6 MB/s 00:15:42.443 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.443 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:42.443 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.443 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.443 12:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:42.443 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.443 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:42.443 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:42.443 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.443 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:42.704 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd0", 00:15:42.704 "bdev_name": "nvme0n1" 00:15:42.704 }, 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd1", 00:15:42.704 "bdev_name": "nvme0n2" 00:15:42.704 }, 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd10", 00:15:42.704 "bdev_name": "nvme0n3" 00:15:42.704 }, 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd11", 00:15:42.704 "bdev_name": "nvme1n1" 00:15:42.704 }, 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd12", 00:15:42.704 "bdev_name": "nvme2n1" 00:15:42.704 }, 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd13", 00:15:42.704 "bdev_name": "nvme3n1" 00:15:42.704 } 00:15:42.704 ]' 00:15:42.704 12:27:49 blockdev_xnvme.bdev_nbd 
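On the start side the check traced above is stronger: after nbd_start_disk, the helper waits for the device node to appear in /proc/partitions and then reads one 4 KiB block with O_DIRECT to prove the bdev actually services I/O. A sketch of that pattern under the same assumptions (the temp-file path and the backoff are illustrative):

    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i size
        # Wait for the device to show up in the partition table.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed backoff
        done
        # Read a single block with O_DIRECT; a hung NBD connection fails here.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null && break
            sleep 0.1
        done
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]    # the trace checks '[' 4096 '!=' 0 ']'
    }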
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd0", 00:15:42.704 "bdev_name": "nvme0n1" 00:15:42.704 }, 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd1", 00:15:42.704 "bdev_name": "nvme0n2" 00:15:42.704 }, 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd10", 00:15:42.704 "bdev_name": "nvme0n3" 00:15:42.704 }, 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd11", 00:15:42.704 "bdev_name": "nvme1n1" 00:15:42.704 }, 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd12", 00:15:42.704 "bdev_name": "nvme2n1" 00:15:42.704 }, 00:15:42.704 { 00:15:42.704 "nbd_device": "/dev/nbd13", 00:15:42.704 "bdev_name": "nvme3n1" 00:15:42.704 } 00:15:42.704 ]' 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:42.705 /dev/nbd1 00:15:42.705 /dev/nbd10 00:15:42.705 /dev/nbd11 00:15:42.705 /dev/nbd12 00:15:42.705 /dev/nbd13' 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:42.705 /dev/nbd1 00:15:42.705 /dev/nbd10 00:15:42.705 /dev/nbd11 00:15:42.705 /dev/nbd12 00:15:42.705 /dev/nbd13' 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:42.705 256+0 records in 00:15:42.705 256+0 records out 00:15:42.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00716223 s, 146 MB/s 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.705 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:42.966 256+0 records in 00:15:42.966 256+0 records out 00:15:42.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.207252 s, 5.1 MB/s 00:15:42.966 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.966 12:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:43.228 256+0 records in 00:15:43.228 256+0 records out 00:15:43.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243423 s, 
4.3 MB/s 00:15:43.228 12:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:43.228 12:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:43.489 256+0 records in 00:15:43.489 256+0 records out 00:15:43.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240484 s, 4.4 MB/s 00:15:43.489 12:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:43.489 12:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:43.751 256+0 records in 00:15:43.751 256+0 records out 00:15:43.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244612 s, 4.3 MB/s 00:15:43.751 12:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:43.751 12:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:44.013 256+0 records in 00:15:44.013 256+0 records out 00:15:44.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.302189 s, 3.5 MB/s 00:15:44.013 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.013 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:44.274 256+0 records in 00:15:44.274 256+0 records out 00:15:44.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240249 s, 4.4 MB/s 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:44.274 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:44.275 
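Taken together, the write and verify passes above implement a simple data-integrity round trip: one random 1 MiB payload, written through every NBD device with O_DIRECT, then compared back byte-for-byte. Condensed, with the device list and temp path copied from the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list="/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13"
    dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random data
    for nbd in $nbd_list; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in $nbd_list; do
        cmp -b -n 1M "$tmp" "$nbd"                          # -b prints the first differing bytes
    done
    rm "$tmp"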
12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:44.275 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:44.275 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:44.275 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:44.275 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:44.275 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:44.275 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.275 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:44.275 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:44.275 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:44.275 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.275 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:44.535 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:44.535 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:44.535 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:44.535 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.535 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.535 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:44.535 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.535 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.535 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.535 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:44.797 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:44.797 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:44.797 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:44.797 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.797 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.797 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:44.797 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.797 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.797 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.797 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:45.057 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:45.057 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:45.057 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:45.057 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.057 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.057 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:45.057 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.057 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.057 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.057 12:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:45.316 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:45.316 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:45.316 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:45.316 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.316 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.316 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:45.316 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.316 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.316 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.316 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.574 
12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.574 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:45.832 12:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:46.091 malloc_lvol_verify 00:15:46.091 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:46.349 ef8f735c-8a3a-4a2d-95ae-14d28e7cc571 00:15:46.349 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:46.607 cd030f81-7d38-4245-b9d6-2529cbd2dbf5 00:15:46.607 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:46.607 /dev/nbd0 00:15:46.865 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:46.865 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:46.865 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:46.865 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:46.865 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:46.865 mke2fs 1.47.0 (5-Feb-2023) 00:15:46.865 Discarding device blocks: 0/4096 
done 00:15:46.865 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:46.865 00:15:46.866 Allocating group tables: 0/1 done 00:15:46.866 Writing inode tables: 0/1 done 00:15:46.866 Creating journal (1024 blocks): done 00:15:46.866 Writing superblocks and filesystem accounting information: 0/1 done 00:15:46.866 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74128 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74128 ']' 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74128 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74128 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74128' 00:15:46.866 killing process with pid 74128 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74128 00:15:46.866 12:27:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74128 00:15:47.802 12:27:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:47.802 00:15:47.802 real 0m10.837s 00:15:47.802 user 0m14.504s 00:15:47.802 sys 0m3.818s 00:15:47.802 ************************************ 00:15:47.802 END TEST bdev_nbd 00:15:47.802 ************************************ 00:15:47.802 12:27:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.802 12:27:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
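The nbd_with_lvol_verify step just traced layers a logical volume on a malloc bdev and proves the exported device is usable end to end by putting a filesystem on it. Reduced to its RPC calls, with the sizes and names copied from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512    # 16 MiB backing bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs    # lvstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                     # 4 MiB lvol inside the store
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0                                     # usability check, as seen above
    $rpc nbd_stop_disk /dev/nbd0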
00:15:47.802 12:27:54 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:47.802 12:27:54 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:47.802 12:27:54 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:47.802 12:27:54 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:47.802 12:27:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:47.802 12:27:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.802 12:27:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.802 ************************************ 00:15:47.802 START TEST bdev_fio 00:15:47.802 ************************************ 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:47.802 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:47.802 ************************************ 00:15:47.802 START TEST bdev_fio_rw_verify 00:15:47.802 ************************************ 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:47.802 12:27:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.802 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.802 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.802 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.802 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.802 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.802 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.802 fio-3.35 00:15:47.802 Starting 6 threads 00:16:00.034 00:16:00.034 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74536: Mon Dec 16 12:28:05 2024 00:16:00.034 read: IOPS=18.0k, BW=70.2MiB/s (73.6MB/s)(702MiB/10004msec) 00:16:00.034 slat (usec): min=2, max=1496, avg= 6.41, stdev=14.86 00:16:00.034 clat (usec): min=83, max=8737, avg=1055.43, stdev=754.42 00:16:00.034 lat (usec): min=86, max=8752, avg=1061.84, stdev=755.29 
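The six job definitions fio echoed back just above come from the generated bdev.fio: the loop traced earlier appends one [job_<bdev>] section per bdev, each pinned to its bdev by filename=, which the spdk_bdev ioengine resolves as a bdev name rather than a file path. A sketch of that assembly (template handling elided):

    bdevs="nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1"
    for b in $bdevs; do
        {
            echo "[job_$b]"
            echo "filename=$b"
        } >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    done

The run itself is wrapped so the ASAN runtime loads before the fio plugin; the trace above locates the libasan the plugin links against and forces it first in LD_PRELOAD:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    [ -n "$asan_lib" ] && export LD_PRELOAD="$asan_lib $plugin"
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio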
00:16:00.034 clat percentiles (usec): 00:16:00.034 | 50.000th=[ 881], 99.000th=[ 3425], 99.900th=[ 4686], 99.990th=[ 6783], 00:16:00.034 | 99.999th=[ 8717] 00:16:00.034 write: IOPS=18.3k, BW=71.5MiB/s (75.0MB/s)(715MiB/10004msec); 0 zone resets 00:16:00.034 slat (usec): min=6, max=7851, avg=37.99, stdev=126.59 00:16:00.034 clat (usec): min=69, max=8882, avg=1278.76, stdev=838.26 00:16:00.034 lat (usec): min=82, max=9981, avg=1316.75, stdev=852.33 00:16:00.034 clat percentiles (usec): 00:16:00.034 | 50.000th=[ 1106], 99.000th=[ 3851], 99.900th=[ 5145], 99.990th=[ 7111], 00:16:00.034 | 99.999th=[ 8848] 00:16:00.034 bw ( KiB/s): min=49058, max=140297, per=100.00%, avg=74342.00, stdev=4542.29, samples=114 00:16:00.034 iops : min=12260, max=35072, avg=18584.05, stdev=1135.54, samples=114 00:16:00.034 lat (usec) : 100=0.03%, 250=6.60%, 500=15.87%, 750=15.33%, 1000=12.21% 00:16:00.034 lat (msec) : 2=35.37%, 4=14.02%, 10=0.58% 00:16:00.034 cpu : usr=42.17%, sys=33.12%, ctx=6230, majf=0, minf=17120 00:16:00.034 IO depths : 1=11.4%, 2=23.8%, 4=51.1%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.034 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.034 issued rwts: total=179678,183147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.034 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:00.034 00:16:00.034 Run status group 0 (all jobs): 00:16:00.034 READ: bw=70.2MiB/s (73.6MB/s), 70.2MiB/s-70.2MiB/s (73.6MB/s-73.6MB/s), io=702MiB (736MB), run=10004-10004msec 00:16:00.034 WRITE: bw=71.5MiB/s (75.0MB/s), 71.5MiB/s-71.5MiB/s (75.0MB/s-75.0MB/s), io=715MiB (750MB), run=10004-10004msec 00:16:00.034 ----------------------------------------------------- 00:16:00.034 Suppressions used: 00:16:00.034 count bytes template 00:16:00.034 6 48 /usr/src/fio/parse.c 00:16:00.034 3350 321600 /usr/src/fio/iolog.c 00:16:00.034 1 8 libtcmalloc_minimal.so 00:16:00.034 1 904 libcrypto.so 00:16:00.034 ----------------------------------------------------- 00:16:00.034 00:16:00.034 00:16:00.034 real 0m11.912s 00:16:00.034 user 0m26.793s 00:16:00.034 sys 0m20.195s 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:00.034 ************************************ 00:16:00.034 END TEST bdev_fio_rw_verify 00:16:00.034 ************************************ 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- 
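The pass being configured next is the trim variant of the same suite, and it only makes sense for bdevs that can unmap; the jq filter in the trace below selects exactly those. As a standalone sketch against bdev_get_bdevs output (in this run every xNVMe bdev reports "unmap": false, so the selection comes back empty and no trim jobs are generated):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'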
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:00.034 12:28:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:00.035 12:28:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3990db16-384d-4928-bd90-14ab51e44981"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3990db16-384d-4928-bd90-14ab51e44981",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "9c354346-d5dd-4016-b3c8-9b7a9d00158d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c354346-d5dd-4016-b3c8-9b7a9d00158d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "61ec7094-232e-4314-9434-0f410ae34723"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "61ec7094-232e-4314-9434-0f410ae34723",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "07c5ac8c-0898-4e4e-b72e-73bf309e697c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "07c5ac8c-0898-4e4e-b72e-73bf309e697c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "414df24a-e526-4edc-9341-9f651a107c85"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "414df24a-e526-4edc-9341-9f651a107c85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "717e42a5-e7a7-4e26-a44e-539a6c040ed7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "717e42a5-e7a7-4e26-a44e-539a6c040ed7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:00.035 12:28:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:00.035 12:28:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.035 /home/vagrant/spdk_repo/spdk 00:16:00.035 12:28:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:00.035 12:28:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:00.035 12:28:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:00.035 00:16:00.035 real 0m12.090s 00:16:00.035 user 
0m26.869s 00:16:00.035 sys 0m20.276s 00:16:00.035 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.035 12:28:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:00.035 ************************************ 00:16:00.035 END TEST bdev_fio 00:16:00.035 ************************************ 00:16:00.035 12:28:06 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:00.035 12:28:06 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:00.035 12:28:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:00.035 12:28:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.035 12:28:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.035 ************************************ 00:16:00.035 START TEST bdev_verify 00:16:00.035 ************************************ 00:16:00.035 12:28:06 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:00.035 [2024-12-16 12:28:06.872553] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:00.035 [2024-12-16 12:28:06.872698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74709 ] 00:16:00.035 [2024-12-16 12:28:07.037521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:00.295 [2024-12-16 12:28:07.184324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.295 [2024-12-16 12:28:07.184461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.868 Running I/O for 5 seconds... 
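Note on the bdev_fio stages that finished above: they drive fio through SPDK's bdev plugin rather than against kernel block devices. A minimal hand-run sketch of an equivalent invocation, assuming the stock repo layout with the spdk_bdev ioengine built under build/fio/ (the workload knobs below are illustrative, not the generated bdev.fio; the plugin requires thread=1, and iodepth=8 matches the depth reported in the fio summary):

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --thread=1 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --name=rw_verify --filename=nvme0n1 \
        --rw=randrw --bs=4k --iodepth=8 --runtime=10 --time_based=1 \
        --verify=md5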
00:16:02.753 23808.00 IOPS, 93.00 MiB/s [2024-12-16T12:28:11.244Z] 23936.00 IOPS, 93.50 MiB/s [2024-12-16T12:28:12.187Z] 24106.67 IOPS, 94.17 MiB/s [2024-12-16T12:28:13.130Z] 23992.00 IOPS, 93.72 MiB/s [2024-12-16T12:28:13.130Z] 23936.00 IOPS, 93.50 MiB/s
00:16:06.024 Latency(us)
00:16:06.024 [2024-12-16T12:28:13.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:06.024 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:06.024 Verification LBA range: start 0x0 length 0x80000
00:16:06.024 nvme0n1 : 5.04 1904.93 7.44 0.00 0.00 67078.06 10435.35 67754.14
00:16:06.024 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:06.024 Verification LBA range: start 0x80000 length 0x80000
00:16:06.024 nvme0n1 : 5.06 1920.81 7.50 0.00 0.00 66502.66 4032.98 66140.95
00:16:06.024 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:06.024 Verification LBA range: start 0x0 length 0x80000
00:16:06.024 nvme0n2 : 5.05 1900.33 7.42 0.00 0.00 67129.92 14216.27 63721.16
00:16:06.024 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:06.024 Verification LBA range: start 0x80000 length 0x80000
00:16:06.024 nvme0n2 : 5.07 1893.53 7.40 0.00 0.00 67308.06 6805.66 64124.46
00:16:06.024 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:06.024 Verification LBA range: start 0x0 length 0x80000
00:16:06.024 nvme0n3 : 5.04 1904.09 7.44 0.00 0.00 66881.06 15022.87 61301.37
00:16:06.024 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:06.024 Verification LBA range: start 0x80000 length 0x80000
00:16:06.024 nvme0n3 : 5.05 1900.95 7.43 0.00 0.00 66888.28 7813.91 62107.96
00:16:06.024 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:06.025 Verification LBA range: start 0x0 length 0x20000
00:16:06.025 nvme1n1 : 5.06 1895.75 7.41 0.00 0.00 67068.14 11191.53 63721.16
00:16:06.025 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:06.025 Verification LBA range: start 0x20000 length 0x20000
00:16:06.025 nvme1n1 : 5.05 1900.20 7.42 0.00 0.00 66753.49 10687.41 68157.44
00:16:06.025 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:06.025 Verification LBA range: start 0x0 length 0xbd0bd
00:16:06.025 nvme2n1 : 5.08 2556.99 9.99 0.00 0.00 49626.57 5520.15 54041.99
00:16:06.025 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:06.025 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:16:06.025 nvme2n1 : 5.08 2505.95 9.79 0.00 0.00 50477.00 4184.22 62914.56
00:16:06.025 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:06.025 Verification LBA range: start 0x0 length 0xa0000
00:16:06.025 nvme3n1 : 5.08 1841.01 7.19 0.00 0.00 68837.31 4789.17 79449.80
00:16:06.025 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:06.025 Verification LBA range: start 0xa0000 length 0xa0000
00:16:06.025 nvme3n1 : 5.09 1609.94 6.29 0.00 0.00 78502.92 6654.42 98808.12
00:16:06.025 [2024-12-16T12:28:13.131Z] ===================================================================================================================
00:16:06.025 [2024-12-16T12:28:13.131Z] Total : 23734.48 92.71 0.00 0.00 64266.34 4032.98 98808.12
00:16:06.968
00:16:06.968 real 0m6.924s
00:16:06.968 user 0m10.953s
00:16:06.968 sys 0m1.694s
12:28:13 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.968 ************************************ 00:16:06.968 12:28:13 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:06.968 END TEST bdev_verify 00:16:06.968 ************************************ 00:16:06.968 12:28:13 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:06.968 12:28:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:06.968 12:28:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.968 12:28:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.968 ************************************ 00:16:06.968 START TEST bdev_verify_big_io 00:16:06.968 ************************************ 00:16:06.968 12:28:13 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:06.968 [2024-12-16 12:28:13.876998] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:06.968 [2024-12-16 12:28:13.877181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74808 ] 00:16:06.968 [2024-12-16 12:28:14.045748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:07.228 [2024-12-16 12:28:14.190007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.228 [2024-12-16 12:28:14.190107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.802 Running I/O for 5 seconds... 
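For reference, the bdevperf invocations in these verify passes take their options straight from the command line logged above: --json loads the bdev config, -q is the queue depth per job, -o the IO size in bytes (4096 for bdev_verify, 65536 for bdev_verify_big_io), -w the workload, -t the runtime in seconds, and -m a reactor core mask (0x3 pins cores 0 and 1, which is why each bdev shows one job per core mask in the tables). The -C flag is left unglossed here; consult bdevperf --help for the authoritative list. A re-run of the plain verify pass, copied from the logged command:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3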
00:16:13.727 1000.00 IOPS, 62.50 MiB/s [2024-12-16T12:28:21.404Z] 2480.00 IOPS, 155.00 MiB/s
00:16:14.298 Latency(us)
00:16:14.298 [2024-12-16T12:28:21.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:14.298 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:14.298 Verification LBA range: start 0x0 length 0x8000
00:16:14.298 nvme0n1 : 5.88 106.05 6.63 0.00 0.00 1176896.81 34683.67 1897115.96
00:16:14.298 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:14.298 Verification LBA range: start 0x8000 length 0x8000
00:16:14.298 nvme0n1 : 5.96 85.86 5.37 0.00 0.00 1442646.35 25710.28 1755154.90
00:16:14.298 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:14.298 Verification LBA range: start 0x0 length 0x8000
00:16:14.298 nvme0n2 : 5.87 138.92 8.68 0.00 0.00 874177.63 6427.57 967916.31
00:16:14.298 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:14.298 Verification LBA range: start 0x8000 length 0x8000
00:16:14.298 nvme0n2 : 5.99 93.41 5.84 0.00 0.00 1245880.41 30045.74 1555118.87
00:16:14.298 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:14.298 Verification LBA range: start 0x0 length 0x8000
00:16:14.298 nvme0n3 : 5.89 119.61 7.48 0.00 0.00 985947.30 71383.83 1729343.80
00:16:14.298 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:14.298 Verification LBA range: start 0x8000 length 0x8000
00:16:14.298 nvme0n3 : 6.00 85.37 5.34 0.00 0.00 1291256.91 46177.67 1077613.49
00:16:14.298 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:14.299 Verification LBA range: start 0x0 length 0x2000
00:16:14.299 nvme1n1 : 5.89 136.07 8.50 0.00 0.00 838017.70 70980.53 1542213.32
00:16:14.299 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:14.299 Verification LBA range: start 0x2000 length 0x2000
00:16:14.299 nvme1n1 : 6.10 116.73 7.30 0.00 0.00 896322.76 29642.44 1045349.61
00:16:14.299 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:14.299 Verification LBA range: start 0x0 length 0xbd0b
00:16:14.299 nvme2n1 : 5.88 141.48 8.84 0.00 0.00 786407.58 8267.62 1806777.11
00:16:14.299 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:14.299 Verification LBA range: start 0xbd0b length 0xbd0b
00:16:14.299 nvme2n1 : 6.29 170.54 10.66 0.00 0.00 590210.17 2432.39 2464960.20
00:16:14.299 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:14.299 Verification LBA range: start 0x0 length 0xa000
00:16:14.299 nvme3n1 : 5.89 152.15 9.51 0.00 0.00 706445.42 3881.75 1013085.74
00:16:14.299 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:14.299 Verification LBA range: start 0xa000 length 0xa000
00:16:14.299 nvme3n1 : 6.45 208.23 13.01 0.00 0.00 461244.38 567.14 3484498.71
00:16:14.299 [2024-12-16T12:28:21.405Z] ===================================================================================================================
00:16:14.299 [2024-12-16T12:28:21.405Z] Total : 1554.43 97.15 0.00 0.00 861863.49 567.14 3484498.71
00:16:15.239
00:16:15.239 real 0m8.206s
00:16:15.239 user 0m15.055s
00:16:15.239 sys 0m0.500s
12:28:22 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
12:28:22 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
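A quick consistency check on the Total line above: 1554.43 IOPS at 64 KiB per IO works out to 1554.43 x 65536 / 1048576, which is 97.15 MiB/s, matching the reported aggregate. In shell form:

    # Sanity-check the aggregate column (IOPS x 64 KiB -> MiB/s)
    awk 'BEGIN { printf "%.2f MiB/s\n", 1554.43 * 65536 / 1048576 }'   # prints 97.15 MiB/s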
00:16:15.239 ************************************
00:16:15.239 END TEST bdev_verify_big_io
00:16:15.239 ************************************
00:16:15.239 12:28:22 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:15.239 12:28:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:16:15.239 12:28:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:15.239 12:28:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:15.239 ************************************
00:16:15.239 START TEST bdev_write_zeroes
00:16:15.239 ************************************
00:16:15.239 12:28:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:15.239 [2024-12-16 12:28:22.131045] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
[2024-12-16 12:28:22.131373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74918 ]
[2024-12-16 12:28:22.288211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:15.499 [2024-12-16 12:28:22.391733] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:15.760 Running I/O for 1 seconds...
00:16:16.702 89408.00 IOPS, 349.25 MiB/s
00:16:16.702 Latency(us)
00:16:16.702 [2024-12-16T12:28:23.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:16.702 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:16.702 nvme0n1 : 1.02 14707.90 57.45 0.00 0.00 8694.45 5242.88 16434.41
00:16:16.702 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:16.702 nvme0n2 : 1.01 14651.25 57.23 0.00 0.00 8722.49 5444.53 16434.41
00:16:16.703 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:16.703 nvme0n3 : 1.01 14633.20 57.16 0.00 0.00 8727.76 5469.74 16636.06
00:16:16.703 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:16.703 nvme1n1 : 1.02 14616.81 57.10 0.00 0.00 8732.07 5469.74 16736.89
00:16:16.703 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:16.703 nvme2n1 : 1.02 15722.66 61.42 0.00 0.00 8112.47 2974.33 14417.92
00:16:16.703 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:16.703 nvme3n1 : 1.02 14598.71 57.03 0.00 0.00 8707.37 5419.32 16938.54
00:16:16.703 [2024-12-16T12:28:23.809Z] ===================================================================================================================
00:16:16.703 [2024-12-16T12:28:23.809Z] Total : 88930.53 347.38 0.00 0.00 8609.44 2974.33 16938.54
00:16:17.274
00:16:17.274 real 0m2.295s
00:16:17.274 user 0m1.669s
00:16:17.274 sys 0m0.474s
12:28:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
12:28:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:16:17.274 ************************************
00:16:17.274 END TEST
bdev_write_zeroes 00:16:17.274 ************************************ 00:16:17.535 12:28:24 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:17.535 12:28:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:17.535 12:28:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.535 12:28:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.535 ************************************ 00:16:17.535 START TEST bdev_json_nonenclosed 00:16:17.535 ************************************ 00:16:17.535 12:28:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:17.535 [2024-12-16 12:28:24.473953] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:17.535 [2024-12-16 12:28:24.474064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74964 ] 00:16:17.535 [2024-12-16 12:28:24.631258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.795 [2024-12-16 12:28:24.722305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.795 [2024-12-16 12:28:24.722375] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:17.795 [2024-12-16 12:28:24.722390] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:17.795 [2024-12-16 12:28:24.722398] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:17.795 ************************************ 00:16:17.795 END TEST bdev_json_nonenclosed 00:16:17.795 ************************************ 00:16:17.795 00:16:17.795 real 0m0.457s 00:16:17.795 user 0m0.263s 00:16:17.795 sys 0m0.091s 00:16:17.795 12:28:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.795 12:28:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:18.056 12:28:24 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:18.056 12:28:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:18.056 12:28:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.056 12:28:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:18.056 ************************************ 00:16:18.056 START TEST bdev_json_nonarray 00:16:18.056 ************************************ 00:16:18.056 12:28:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:18.056 [2024-12-16 12:28:24.986556] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
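The two json_config tests here (bdev_json_nonenclosed above and bdev_json_nonarray, whose startup this is) are negative tests: each hands bdevperf a deliberately malformed config and passes when json_config rejects it with the errors logged ("not enclosed in {}" and "'subsystems' should be an array") and spdk_app_stop exits non-zero. A hedged sketch of the malformed shapes, inferred from those error messages rather than from the repo files themselves:

    # well-formed:       { "subsystems": [ ... ] }
    # nonenclosed.json:  "subsystems": [ ... ]        # top level not wrapped in {}
    # nonarray.json:     { "subsystems": { ... } }    # object where an array is required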
00:16:18.056 [2024-12-16 12:28:24.986664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74991 ] 00:16:18.056 [2024-12-16 12:28:25.140985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.317 [2024-12-16 12:28:25.233788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.317 [2024-12-16 12:28:25.233878] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:18.317 [2024-12-16 12:28:25.233895] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:18.317 [2024-12-16 12:28:25.233903] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:18.317 00:16:18.317 real 0m0.465s 00:16:18.317 user 0m0.269s 00:16:18.317 sys 0m0.091s 00:16:18.317 12:28:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.317 ************************************ 00:16:18.317 END TEST bdev_json_nonarray 00:16:18.317 ************************************ 00:16:18.317 12:28:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:18.577 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:18.577 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:18.577 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:18.577 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:18.577 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:18.578 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:18.578 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:18.578 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:18.578 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:18.578 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:18.578 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:18.578 12:28:25 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:18.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:20.222 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.222 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.222 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.484 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.484 00:16:20.484 real 0m51.939s 00:16:20.484 user 1m21.423s 00:16:20.484 sys 0m32.185s 00:16:20.484 ************************************ 00:16:20.484 END TEST blockdev_xnvme 00:16:20.484 ************************************ 00:16:20.484 12:28:27 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.484 12:28:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.484 12:28:27 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:20.484 12:28:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:20.484 12:28:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.484 12:28:27 -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.745 ************************************ 00:16:20.745 START TEST ublk 00:16:20.745 ************************************ 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:20.745 * Looking for test storage... 00:16:20.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:20.745 12:28:27 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.745 12:28:27 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.745 12:28:27 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.745 12:28:27 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.745 12:28:27 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.745 12:28:27 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.745 12:28:27 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.745 12:28:27 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.745 12:28:27 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.745 12:28:27 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.745 12:28:27 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.745 12:28:27 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:20.745 12:28:27 ublk -- scripts/common.sh@345 -- # : 1 00:16:20.745 12:28:27 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.745 12:28:27 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.745 12:28:27 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:20.745 12:28:27 ublk -- scripts/common.sh@353 -- # local d=1 00:16:20.745 12:28:27 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.745 12:28:27 ublk -- scripts/common.sh@355 -- # echo 1 00:16:20.745 12:28:27 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.745 12:28:27 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:20.745 12:28:27 ublk -- scripts/common.sh@353 -- # local d=2 00:16:20.745 12:28:27 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.745 12:28:27 ublk -- scripts/common.sh@355 -- # echo 2 00:16:20.745 12:28:27 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.745 12:28:27 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.745 12:28:27 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.745 12:28:27 ublk -- scripts/common.sh@368 -- # return 0 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.745 --rc genhtml_branch_coverage=1 00:16:20.745 --rc genhtml_function_coverage=1 00:16:20.745 --rc genhtml_legend=1 00:16:20.745 --rc geninfo_all_blocks=1 00:16:20.745 --rc geninfo_unexecuted_blocks=1 00:16:20.745 00:16:20.745 ' 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.745 --rc genhtml_branch_coverage=1 00:16:20.745 --rc genhtml_function_coverage=1 00:16:20.745 --rc genhtml_legend=1 00:16:20.745 --rc geninfo_all_blocks=1 00:16:20.745 --rc geninfo_unexecuted_blocks=1 00:16:20.745 00:16:20.745 ' 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.745 --rc genhtml_branch_coverage=1 00:16:20.745 --rc genhtml_function_coverage=1 00:16:20.745 --rc genhtml_legend=1 00:16:20.745 --rc geninfo_all_blocks=1 00:16:20.745 --rc geninfo_unexecuted_blocks=1 00:16:20.745 00:16:20.745 ' 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.745 --rc genhtml_branch_coverage=1 00:16:20.745 --rc genhtml_function_coverage=1 00:16:20.745 --rc genhtml_legend=1 00:16:20.745 --rc geninfo_all_blocks=1 00:16:20.745 --rc geninfo_unexecuted_blocks=1 00:16:20.745 00:16:20.745 ' 00:16:20.745 12:28:27 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:20.745 12:28:27 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:20.745 12:28:27 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:20.745 12:28:27 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:20.745 12:28:27 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:20.745 12:28:27 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:20.745 12:28:27 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:20.745 12:28:27 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:20.745 12:28:27 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:20.745 12:28:27 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:20.745 12:28:27 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:20.745 12:28:27 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:20.745 12:28:27 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:20.745 12:28:27 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:20.745 12:28:27 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:20.745 12:28:27 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:20.745 12:28:27 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:20.745 12:28:27 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:20.745 12:28:27 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:20.745 12:28:27 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.745 12:28:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:20.745 ************************************ 00:16:20.745 START TEST test_save_ublk_config 00:16:20.745 ************************************ 00:16:20.745 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:20.746 12:28:27 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:20.746 12:28:27 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75270 00:16:20.746 12:28:27 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:20.746 12:28:27 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75270 00:16:20.746 12:28:27 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:20.746 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75270 ']' 00:16:20.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.746 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.746 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.746 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.746 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.746 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.007 [2024-12-16 12:28:27.859390] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
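test_save_ublk_config, whose target is starting up here, reduces to: create a ublk target and a disk over a malloc bdev, snapshot the running config with save_config, kill the target, then replay the snapshot into a fresh one and check the disk comes back. A hedged RPC sketch of the setup half (the method names match the config dumped below; the rpc.py flag spellings are from memory and the output path is illustrative):

    scripts/rpc.py ublk_create_target                        # cpumask defaults; the saved config shows "1"
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096     # 32 MiB = 8192 blocks of 4 KiB, as dumped below
    scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128     # 1 queue, depth 128 -> /dev/ublkb0
    scripts/rpc.py save_config > /tmp/ublk.json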
00:16:21.007 [2024-12-16 12:28:27.859528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75270 ] 00:16:21.007 [2024-12-16 12:28:28.024554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.269 [2024-12-16 12:28:28.149133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.841 12:28:28 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.841 12:28:28 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:21.841 12:28:28 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:21.841 12:28:28 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:21.841 12:28:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.841 12:28:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.841 [2024-12-16 12:28:28.861183] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:21.841 [2024-12-16 12:28:28.862057] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:21.841 malloc0 00:16:21.841 [2024-12-16 12:28:28.933320] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:21.841 [2024-12-16 12:28:28.933428] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:21.841 [2024-12-16 12:28:28.933440] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:21.841 [2024-12-16 12:28:28.933448] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:21.841 [2024-12-16 12:28:28.942290] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:21.841 [2024-12-16 12:28:28.942322] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:22.102 [2024-12-16 12:28:28.949200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:22.102 [2024-12-16 12:28:28.949313] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:22.102 [2024-12-16 12:28:28.965194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:22.102 0 00:16:22.102 12:28:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.102 12:28:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:22.102 12:28:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.102 12:28:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:22.363 12:28:29 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.363 12:28:29 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:22.363 "subsystems": [ 00:16:22.363 { 00:16:22.363 "subsystem": "fsdev", 00:16:22.363 "config": [ 00:16:22.363 { 00:16:22.363 "method": "fsdev_set_opts", 00:16:22.363 "params": { 00:16:22.363 "fsdev_io_pool_size": 65535, 00:16:22.363 "fsdev_io_cache_size": 256 00:16:22.363 } 00:16:22.363 } 00:16:22.363 ] 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "subsystem": "keyring", 00:16:22.363 "config": [] 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "subsystem": "iobuf", 00:16:22.363 "config": [ 00:16:22.363 { 
00:16:22.363 "method": "iobuf_set_options", 00:16:22.363 "params": { 00:16:22.363 "small_pool_count": 8192, 00:16:22.363 "large_pool_count": 1024, 00:16:22.363 "small_bufsize": 8192, 00:16:22.363 "large_bufsize": 135168, 00:16:22.363 "enable_numa": false 00:16:22.363 } 00:16:22.363 } 00:16:22.363 ] 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "subsystem": "sock", 00:16:22.363 "config": [ 00:16:22.363 { 00:16:22.363 "method": "sock_set_default_impl", 00:16:22.363 "params": { 00:16:22.363 "impl_name": "posix" 00:16:22.363 } 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "method": "sock_impl_set_options", 00:16:22.363 "params": { 00:16:22.363 "impl_name": "ssl", 00:16:22.363 "recv_buf_size": 4096, 00:16:22.363 "send_buf_size": 4096, 00:16:22.363 "enable_recv_pipe": true, 00:16:22.363 "enable_quickack": false, 00:16:22.363 "enable_placement_id": 0, 00:16:22.363 "enable_zerocopy_send_server": true, 00:16:22.363 "enable_zerocopy_send_client": false, 00:16:22.363 "zerocopy_threshold": 0, 00:16:22.363 "tls_version": 0, 00:16:22.363 "enable_ktls": false 00:16:22.363 } 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "method": "sock_impl_set_options", 00:16:22.363 "params": { 00:16:22.363 "impl_name": "posix", 00:16:22.363 "recv_buf_size": 2097152, 00:16:22.363 "send_buf_size": 2097152, 00:16:22.363 "enable_recv_pipe": true, 00:16:22.363 "enable_quickack": false, 00:16:22.363 "enable_placement_id": 0, 00:16:22.363 "enable_zerocopy_send_server": true, 00:16:22.363 "enable_zerocopy_send_client": false, 00:16:22.363 "zerocopy_threshold": 0, 00:16:22.363 "tls_version": 0, 00:16:22.363 "enable_ktls": false 00:16:22.363 } 00:16:22.363 } 00:16:22.363 ] 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "subsystem": "vmd", 00:16:22.363 "config": [] 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "subsystem": "accel", 00:16:22.363 "config": [ 00:16:22.363 { 00:16:22.363 "method": "accel_set_options", 00:16:22.363 "params": { 00:16:22.363 "small_cache_size": 128, 00:16:22.363 "large_cache_size": 16, 00:16:22.363 "task_count": 2048, 00:16:22.363 "sequence_count": 2048, 00:16:22.363 "buf_count": 2048 00:16:22.363 } 00:16:22.363 } 00:16:22.363 ] 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "subsystem": "bdev", 00:16:22.363 "config": [ 00:16:22.363 { 00:16:22.363 "method": "bdev_set_options", 00:16:22.363 "params": { 00:16:22.363 "bdev_io_pool_size": 65535, 00:16:22.363 "bdev_io_cache_size": 256, 00:16:22.363 "bdev_auto_examine": true, 00:16:22.363 "iobuf_small_cache_size": 128, 00:16:22.363 "iobuf_large_cache_size": 16 00:16:22.363 } 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "method": "bdev_raid_set_options", 00:16:22.363 "params": { 00:16:22.363 "process_window_size_kb": 1024, 00:16:22.363 "process_max_bandwidth_mb_sec": 0 00:16:22.363 } 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "method": "bdev_iscsi_set_options", 00:16:22.363 "params": { 00:16:22.363 "timeout_sec": 30 00:16:22.363 } 00:16:22.363 }, 00:16:22.363 { 00:16:22.363 "method": "bdev_nvme_set_options", 00:16:22.363 "params": { 00:16:22.363 "action_on_timeout": "none", 00:16:22.363 "timeout_us": 0, 00:16:22.363 "timeout_admin_us": 0, 00:16:22.363 "keep_alive_timeout_ms": 10000, 00:16:22.363 "arbitration_burst": 0, 00:16:22.363 "low_priority_weight": 0, 00:16:22.363 "medium_priority_weight": 0, 00:16:22.363 "high_priority_weight": 0, 00:16:22.363 "nvme_adminq_poll_period_us": 10000, 00:16:22.363 "nvme_ioq_poll_period_us": 0, 00:16:22.363 "io_queue_requests": 0, 00:16:22.363 "delay_cmd_submit": true, 00:16:22.363 "transport_retry_count": 4, 00:16:22.363 
"bdev_retry_count": 3, 00:16:22.363 "transport_ack_timeout": 0, 00:16:22.363 "ctrlr_loss_timeout_sec": 0, 00:16:22.363 "reconnect_delay_sec": 0, 00:16:22.363 "fast_io_fail_timeout_sec": 0, 00:16:22.363 "disable_auto_failback": false, 00:16:22.363 "generate_uuids": false, 00:16:22.363 "transport_tos": 0, 00:16:22.363 "nvme_error_stat": false, 00:16:22.363 "rdma_srq_size": 0, 00:16:22.363 "io_path_stat": false, 00:16:22.363 "allow_accel_sequence": false, 00:16:22.363 "rdma_max_cq_size": 0, 00:16:22.363 "rdma_cm_event_timeout_ms": 0, 00:16:22.363 "dhchap_digests": [ 00:16:22.363 "sha256", 00:16:22.363 "sha384", 00:16:22.363 "sha512" 00:16:22.363 ], 00:16:22.363 "dhchap_dhgroups": [ 00:16:22.363 "null", 00:16:22.363 "ffdhe2048", 00:16:22.363 "ffdhe3072", 00:16:22.363 "ffdhe4096", 00:16:22.364 "ffdhe6144", 00:16:22.364 "ffdhe8192" 00:16:22.364 ], 00:16:22.364 "rdma_umr_per_io": false 00:16:22.364 } 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "method": "bdev_nvme_set_hotplug", 00:16:22.364 "params": { 00:16:22.364 "period_us": 100000, 00:16:22.364 "enable": false 00:16:22.364 } 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "method": "bdev_malloc_create", 00:16:22.364 "params": { 00:16:22.364 "name": "malloc0", 00:16:22.364 "num_blocks": 8192, 00:16:22.364 "block_size": 4096, 00:16:22.364 "physical_block_size": 4096, 00:16:22.364 "uuid": "934fc001-f33b-4602-a906-84975d8ecbf7", 00:16:22.364 "optimal_io_boundary": 0, 00:16:22.364 "md_size": 0, 00:16:22.364 "dif_type": 0, 00:16:22.364 "dif_is_head_of_md": false, 00:16:22.364 "dif_pi_format": 0 00:16:22.364 } 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "method": "bdev_wait_for_examine" 00:16:22.364 } 00:16:22.364 ] 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "subsystem": "scsi", 00:16:22.364 "config": null 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "subsystem": "scheduler", 00:16:22.364 "config": [ 00:16:22.364 { 00:16:22.364 "method": "framework_set_scheduler", 00:16:22.364 "params": { 00:16:22.364 "name": "static" 00:16:22.364 } 00:16:22.364 } 00:16:22.364 ] 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "subsystem": "vhost_scsi", 00:16:22.364 "config": [] 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "subsystem": "vhost_blk", 00:16:22.364 "config": [] 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "subsystem": "ublk", 00:16:22.364 "config": [ 00:16:22.364 { 00:16:22.364 "method": "ublk_create_target", 00:16:22.364 "params": { 00:16:22.364 "cpumask": "1" 00:16:22.364 } 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "method": "ublk_start_disk", 00:16:22.364 "params": { 00:16:22.364 "bdev_name": "malloc0", 00:16:22.364 "ublk_id": 0, 00:16:22.364 "num_queues": 1, 00:16:22.364 "queue_depth": 128 00:16:22.364 } 00:16:22.364 } 00:16:22.364 ] 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "subsystem": "nbd", 00:16:22.364 "config": [] 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "subsystem": "nvmf", 00:16:22.364 "config": [ 00:16:22.364 { 00:16:22.364 "method": "nvmf_set_config", 00:16:22.364 "params": { 00:16:22.364 "discovery_filter": "match_any", 00:16:22.364 "admin_cmd_passthru": { 00:16:22.364 "identify_ctrlr": false 00:16:22.364 }, 00:16:22.364 "dhchap_digests": [ 00:16:22.364 "sha256", 00:16:22.364 "sha384", 00:16:22.364 "sha512" 00:16:22.364 ], 00:16:22.364 "dhchap_dhgroups": [ 00:16:22.364 "null", 00:16:22.364 "ffdhe2048", 00:16:22.364 "ffdhe3072", 00:16:22.364 "ffdhe4096", 00:16:22.364 "ffdhe6144", 00:16:22.364 "ffdhe8192" 00:16:22.364 ] 00:16:22.364 } 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "method": "nvmf_set_max_subsystems", 00:16:22.364 "params": { 
00:16:22.364 "max_subsystems": 1024 00:16:22.364 } 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "method": "nvmf_set_crdt", 00:16:22.364 "params": { 00:16:22.364 "crdt1": 0, 00:16:22.364 "crdt2": 0, 00:16:22.364 "crdt3": 0 00:16:22.364 } 00:16:22.364 } 00:16:22.364 ] 00:16:22.364 }, 00:16:22.364 { 00:16:22.364 "subsystem": "iscsi", 00:16:22.364 "config": [ 00:16:22.364 { 00:16:22.364 "method": "iscsi_set_options", 00:16:22.364 "params": { 00:16:22.364 "node_base": "iqn.2016-06.io.spdk", 00:16:22.364 "max_sessions": 128, 00:16:22.364 "max_connections_per_session": 2, 00:16:22.364 "max_queue_depth": 64, 00:16:22.364 "default_time2wait": 2, 00:16:22.364 "default_time2retain": 20, 00:16:22.364 "first_burst_length": 8192, 00:16:22.364 "immediate_data": true, 00:16:22.364 "allow_duplicated_isid": false, 00:16:22.364 "error_recovery_level": 0, 00:16:22.364 "nop_timeout": 60, 00:16:22.364 "nop_in_interval": 30, 00:16:22.364 "disable_chap": false, 00:16:22.364 "require_chap": false, 00:16:22.364 "mutual_chap": false, 00:16:22.364 "chap_group": 0, 00:16:22.364 "max_large_datain_per_connection": 64, 00:16:22.364 "max_r2t_per_connection": 4, 00:16:22.364 "pdu_pool_size": 36864, 00:16:22.364 "immediate_data_pool_size": 16384, 00:16:22.364 "data_out_pool_size": 2048 00:16:22.364 } 00:16:22.364 } 00:16:22.364 ] 00:16:22.364 } 00:16:22.364 ] 00:16:22.364 }' 00:16:22.364 12:28:29 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75270 00:16:22.364 12:28:29 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75270 ']' 00:16:22.364 12:28:29 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75270 00:16:22.364 12:28:29 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:22.364 12:28:29 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.364 12:28:29 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75270 00:16:22.364 killing process with pid 75270 00:16:22.364 12:28:29 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.364 12:28:29 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.364 12:28:29 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75270' 00:16:22.364 12:28:29 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75270 00:16:22.364 12:28:29 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75270 00:16:23.308 [2024-12-16 12:28:30.395147] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:23.570 [2024-12-16 12:28:30.432206] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:23.570 [2024-12-16 12:28:30.432356] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:23.570 [2024-12-16 12:28:30.442210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:23.570 [2024-12-16 12:28:30.442281] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:23.570 [2024-12-16 12:28:30.442295] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:23.570 [2024-12-16 12:28:30.442325] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:23.570 [2024-12-16 12:28:30.442478] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:24.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:24.948 12:28:31 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75331 00:16:24.948 12:28:31 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75331 00:16:24.948 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75331 ']' 00:16:24.948 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.948 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.948 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.948 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.948 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:24.948 12:28:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:24.948 12:28:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:24.948 "subsystems": [ 00:16:24.948 { 00:16:24.948 "subsystem": "fsdev", 00:16:24.948 "config": [ 00:16:24.948 { 00:16:24.948 "method": "fsdev_set_opts", 00:16:24.948 "params": { 00:16:24.948 "fsdev_io_pool_size": 65535, 00:16:24.948 "fsdev_io_cache_size": 256 00:16:24.948 } 00:16:24.948 } 00:16:24.948 ] 00:16:24.948 }, 00:16:24.948 { 00:16:24.948 "subsystem": "keyring", 00:16:24.948 "config": [] 00:16:24.948 }, 00:16:24.948 { 00:16:24.948 "subsystem": "iobuf", 00:16:24.948 "config": [ 00:16:24.948 { 00:16:24.948 "method": "iobuf_set_options", 00:16:24.948 "params": { 00:16:24.948 "small_pool_count": 8192, 00:16:24.949 "large_pool_count": 1024, 00:16:24.949 "small_bufsize": 8192, 00:16:24.949 "large_bufsize": 135168, 00:16:24.949 "enable_numa": false 00:16:24.949 } 00:16:24.949 } 00:16:24.949 ] 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "sock", 00:16:24.949 "config": [ 00:16:24.949 { 00:16:24.949 "method": "sock_set_default_impl", 00:16:24.949 "params": { 00:16:24.949 "impl_name": "posix" 00:16:24.949 } 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "method": "sock_impl_set_options", 00:16:24.949 "params": { 00:16:24.949 "impl_name": "ssl", 00:16:24.949 "recv_buf_size": 4096, 00:16:24.949 "send_buf_size": 4096, 00:16:24.949 "enable_recv_pipe": true, 00:16:24.949 "enable_quickack": false, 00:16:24.949 "enable_placement_id": 0, 00:16:24.949 "enable_zerocopy_send_server": true, 00:16:24.949 "enable_zerocopy_send_client": false, 00:16:24.949 "zerocopy_threshold": 0, 00:16:24.949 "tls_version": 0, 00:16:24.949 "enable_ktls": false 00:16:24.949 } 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "method": "sock_impl_set_options", 00:16:24.949 "params": { 00:16:24.949 "impl_name": "posix", 00:16:24.949 "recv_buf_size": 2097152, 00:16:24.949 "send_buf_size": 2097152, 00:16:24.949 "enable_recv_pipe": true, 00:16:24.949 "enable_quickack": false, 00:16:24.949 "enable_placement_id": 0, 00:16:24.949 "enable_zerocopy_send_server": true, 00:16:24.949 "enable_zerocopy_send_client": false, 00:16:24.949 "zerocopy_threshold": 0, 00:16:24.949 "tls_version": 0, 00:16:24.949 "enable_ktls": false 00:16:24.949 } 00:16:24.949 } 00:16:24.949 ] 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "vmd", 00:16:24.949 "config": [] 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "accel", 00:16:24.949 "config": [ 00:16:24.949 { 00:16:24.949 "method": "accel_set_options", 00:16:24.949 "params": { 
00:16:24.949 "small_cache_size": 128, 00:16:24.949 "large_cache_size": 16, 00:16:24.949 "task_count": 2048, 00:16:24.949 "sequence_count": 2048, 00:16:24.949 "buf_count": 2048 00:16:24.949 } 00:16:24.949 } 00:16:24.949 ] 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "bdev", 00:16:24.949 "config": [ 00:16:24.949 { 00:16:24.949 "method": "bdev_set_options", 00:16:24.949 "params": { 00:16:24.949 "bdev_io_pool_size": 65535, 00:16:24.949 "bdev_io_cache_size": 256, 00:16:24.949 "bdev_auto_examine": true, 00:16:24.949 "iobuf_small_cache_size": 128, 00:16:24.949 "iobuf_large_cache_size": 16 00:16:24.949 } 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "method": "bdev_raid_set_options", 00:16:24.949 "params": { 00:16:24.949 "process_window_size_kb": 1024, 00:16:24.949 "process_max_bandwidth_mb_sec": 0 00:16:24.949 } 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "method": "bdev_iscsi_set_options", 00:16:24.949 "params": { 00:16:24.949 "timeout_sec": 30 00:16:24.949 } 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "method": "bdev_nvme_set_options", 00:16:24.949 "params": { 00:16:24.949 "action_on_timeout": "none", 00:16:24.949 "timeout_us": 0, 00:16:24.949 "timeout_admin_us": 0, 00:16:24.949 "keep_alive_timeout_ms": 10000, 00:16:24.949 "arbitration_burst": 0, 00:16:24.949 "low_priority_weight": 0, 00:16:24.949 "medium_priority_weight": 0, 00:16:24.949 "high_priority_weight": 0, 00:16:24.949 "nvme_adminq_poll_period_us": 10000, 00:16:24.949 "nvme_ioq_poll_period_us": 0, 00:16:24.949 "io_queue_requests": 0, 00:16:24.949 "delay_cmd_submit": true, 00:16:24.949 "transport_retry_count": 4, 00:16:24.949 "bdev_retry_count": 3, 00:16:24.949 "transport_ack_timeout": 0, 00:16:24.949 "ctrlr_loss_timeout_sec": 0, 00:16:24.949 "reconnect_delay_sec": 0, 00:16:24.949 "fast_io_fail_timeout_sec": 0, 00:16:24.949 "disable_auto_failback": false, 00:16:24.949 "generate_uuids": false, 00:16:24.949 "transport_tos": 0, 00:16:24.949 "nvme_error_stat": false, 00:16:24.949 "rdma_srq_size": 0, 00:16:24.949 "io_path_stat": false, 00:16:24.949 "allow_accel_sequence": false, 00:16:24.949 "rdma_max_cq_size": 0, 00:16:24.949 "rdma_cm_event_timeout_ms": 0, 00:16:24.949 "dhchap_digests": [ 00:16:24.949 "sha256", 00:16:24.949 "sha384", 00:16:24.949 "sha512" 00:16:24.949 ], 00:16:24.949 "dhchap_dhgroups": [ 00:16:24.949 "null", 00:16:24.949 "ffdhe2048", 00:16:24.949 "ffdhe3072", 00:16:24.949 "ffdhe4096", 00:16:24.949 "ffdhe6144", 00:16:24.949 "ffdhe8192" 00:16:24.949 ], 00:16:24.949 "rdma_umr_per_io": false 00:16:24.949 } 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "method": "bdev_nvme_set_hotplug", 00:16:24.949 "params": { 00:16:24.949 "period_us": 100000, 00:16:24.949 "enable": false 00:16:24.949 } 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "method": "bdev_malloc_create", 00:16:24.949 "params": { 00:16:24.949 "name": "malloc0", 00:16:24.949 "num_blocks": 8192, 00:16:24.949 "block_size": 4096, 00:16:24.949 "physical_block_size": 4096, 00:16:24.949 "uuid": "934fc001-f33b-4602-a906-84975d8ecbf7", 00:16:24.949 "optimal_io_boundary": 0, 00:16:24.949 "md_size": 0, 00:16:24.949 "dif_type": 0, 00:16:24.949 "dif_is_head_of_md": false, 00:16:24.949 "dif_pi_format": 0 00:16:24.949 } 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "method": "bdev_wait_for_examine" 00:16:24.949 } 00:16:24.949 ] 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "scsi", 00:16:24.949 "config": null 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "scheduler", 00:16:24.949 "config": [ 00:16:24.949 { 00:16:24.949 "method": "framework_set_scheduler", 
00:16:24.949 "params": { 00:16:24.949 "name": "static" 00:16:24.949 } 00:16:24.949 } 00:16:24.949 ] 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "vhost_scsi", 00:16:24.949 "config": [] 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "vhost_blk", 00:16:24.949 "config": [] 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "ublk", 00:16:24.949 "config": [ 00:16:24.949 { 00:16:24.949 "method": "ublk_create_target", 00:16:24.949 "params": { 00:16:24.949 "cpumask": "1" 00:16:24.949 } 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "method": "ublk_start_disk", 00:16:24.949 "params": { 00:16:24.949 "bdev_name": "malloc0", 00:16:24.949 "ublk_id": 0, 00:16:24.949 "num_queues": 1, 00:16:24.949 "queue_depth": 128 00:16:24.949 } 00:16:24.949 } 00:16:24.949 ] 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "nbd", 00:16:24.949 "config": [] 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "nvmf", 00:16:24.949 "config": [ 00:16:24.949 { 00:16:24.949 "method": "nvmf_set_config", 00:16:24.949 "params": { 00:16:24.949 "discovery_filter": "match_any", 00:16:24.949 "admin_cmd_passthru": { 00:16:24.949 "identify_ctrlr": false 00:16:24.949 }, 00:16:24.949 "dhchap_digests": [ 00:16:24.949 "sha256", 00:16:24.949 "sha384", 00:16:24.949 "sha512" 00:16:24.949 ], 00:16:24.949 "dhchap_dhgroups": [ 00:16:24.949 "null", 00:16:24.949 "ffdhe2048", 00:16:24.949 "ffdhe3072", 00:16:24.949 "ffdhe4096", 00:16:24.949 "ffdhe6144", 00:16:24.949 "ffdhe8192" 00:16:24.949 ] 00:16:24.949 } 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "method": "nvmf_set_max_subsystems", 00:16:24.949 "params": { 00:16:24.949 "max_subsystems": 1024 00:16:24.949 } 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "method": "nvmf_set_crdt", 00:16:24.949 "params": { 00:16:24.949 "crdt1": 0, 00:16:24.949 "crdt2": 0, 00:16:24.949 "crdt3": 0 00:16:24.949 } 00:16:24.949 } 00:16:24.949 ] 00:16:24.949 }, 00:16:24.949 { 00:16:24.949 "subsystem": "iscsi", 00:16:24.949 "config": [ 00:16:24.949 { 00:16:24.949 "method": "iscsi_set_options", 00:16:24.949 "params": { 00:16:24.949 "node_base": "iqn.2016-06.io.spdk", 00:16:24.949 "max_sessions": 128, 00:16:24.949 "max_connections_per_session": 2, 00:16:24.949 "max_queue_depth": 64, 00:16:24.949 "default_time2wait": 2, 00:16:24.949 "default_time2retain": 20, 00:16:24.949 "first_burst_length": 8192, 00:16:24.949 "immediate_data": true, 00:16:24.949 "allow_duplicated_isid": false, 00:16:24.949 "error_recovery_level": 0, 00:16:24.949 "nop_timeout": 60, 00:16:24.949 "nop_in_interval": 30, 00:16:24.949 "disable_chap": false, 00:16:24.949 "require_chap": false, 00:16:24.949 "mutual_chap": false, 00:16:24.949 "chap_group": 0, 00:16:24.949 "max_large_datain_per_connection": 64, 00:16:24.949 "max_r2t_per_connection": 4, 00:16:24.949 "pdu_pool_size": 36864, 00:16:24.949 "immediate_data_pool_size": 16384, 00:16:24.949 "data_out_pool_size": 2048 00:16:24.949 } 00:16:24.949 } 00:16:24.949 ] 00:16:24.949 } 00:16:24.949 ] 00:16:24.949 }' 00:16:24.949 [2024-12-16 12:28:31.788710] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:24.950 [2024-12-16 12:28:31.788989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75331 ] 00:16:24.950 [2024-12-16 12:28:31.934034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.950 [2024-12-16 12:28:32.007788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.884 [2024-12-16 12:28:32.651172] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:25.884 [2024-12-16 12:28:32.651792] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:25.884 [2024-12-16 12:28:32.659258] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:25.884 [2024-12-16 12:28:32.659371] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:25.884 [2024-12-16 12:28:32.659395] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:25.884 [2024-12-16 12:28:32.659439] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:25.884 [2024-12-16 12:28:32.668220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:25.884 [2024-12-16 12:28:32.668303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:25.884 [2024-12-16 12:28:32.675179] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:25.884 [2024-12-16 12:28:32.675247] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:25.884 [2024-12-16 12:28:32.692171] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75331 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75331 ']' 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75331 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75331 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75331' 00:16:25.884 killing process with pid 75331 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75331 00:16:25.884 12:28:32 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75331 00:16:26.820 [2024-12-16 12:28:33.778808] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:26.820 [2024-12-16 12:28:33.815236] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:26.820 [2024-12-16 12:28:33.815400] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:26.820 [2024-12-16 12:28:33.824182] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:26.820 [2024-12-16 12:28:33.824242] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:26.820 [2024-12-16 12:28:33.824259] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:26.820 [2024-12-16 12:28:33.824326] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:26.820 [2024-12-16 12:28:33.824446] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:28.196 12:28:34 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:28.196 00:16:28.196 real 0m7.234s 00:16:28.196 user 0m4.939s 00:16:28.196 sys 0m2.890s 00:16:28.196 12:28:35 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.196 12:28:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:28.196 ************************************ 00:16:28.196 END TEST test_save_ublk_config 00:16:28.196 ************************************ 00:16:28.196 12:28:35 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75402 00:16:28.196 12:28:35 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:28.196 12:28:35 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75402 00:16:28.196 12:28:35 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:28.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.196 12:28:35 ublk -- common/autotest_common.sh@835 -- # '[' -z 75402 ']' 00:16:28.196 12:28:35 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.196 12:28:35 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.196 12:28:35 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.196 12:28:35 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.196 12:28:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.196 [2024-12-16 12:28:35.125288] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
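At this point the restored target has been verified (ublk_get_disks reported /dev/ublkb0 and the block device node exists), the first spdk_tgt has been killed, and a new one is being started for the create tests. The waitforlisten helper seen in the trace simply retries an RPC until the socket answers; a rough standalone equivalent, assuming the default /var/tmp/spdk.sock socket:

    build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # poll with a cheap RPC until the target is ready to serve requests
    until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done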
00:16:28.196 [2024-12-16 12:28:35.125582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75402 ] 00:16:28.196 [2024-12-16 12:28:35.279846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:28.457 [2024-12-16 12:28:35.403613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.457 [2024-12-16 12:28:35.403704] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.029 12:28:36 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.029 12:28:36 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:29.029 12:28:36 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:29.030 12:28:36 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:29.030 12:28:36 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.030 12:28:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.030 ************************************ 00:16:29.030 START TEST test_create_ublk 00:16:29.030 ************************************ 00:16:29.030 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:29.030 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:29.030 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.030 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.030 [2024-12-16 12:28:36.129187] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:29.030 [2024-12-16 12:28:36.131611] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:29.030 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.030 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:29.291 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:29.291 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.292 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.292 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.292 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:29.292 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:29.292 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.292 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.292 [2024-12-16 12:28:36.390399] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:29.292 [2024-12-16 12:28:36.390897] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:29.292 [2024-12-16 12:28:36.390919] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:29.292 [2024-12-16 12:28:36.390929] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:29.553 [2024-12-16 12:28:36.399598] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:29.553 [2024-12-16 12:28:36.399635] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:29.553 
[2024-12-16 12:28:36.406230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:29.553 [2024-12-16 12:28:36.407023] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:29.553 [2024-12-16 12:28:36.427215] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:29.553 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:29.553 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.553 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.553 12:28:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:29.553 { 00:16:29.553 "ublk_device": "/dev/ublkb0", 00:16:29.553 "id": 0, 00:16:29.553 "queue_depth": 512, 00:16:29.553 "num_queues": 4, 00:16:29.553 "bdev_name": "Malloc0" 00:16:29.553 } 00:16:29.553 ]' 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:29.553 12:28:36 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:29.553 12:28:36 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:29.553 12:28:36 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:29.553 12:28:36 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:29.553 12:28:36 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:29.553 12:28:36 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:29.553 12:28:36 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:29.553 12:28:36 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:29.553 12:28:36 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:29.553 12:28:36 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:29.553 12:28:36 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
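run_fio_test has now expanded into a single fio command: a 10-second, time-based, direct-I/O write of pattern 0xcc across the first 128 MiB of /dev/ublkb0, with verification enabled so corruption anywhere in the ublk data path fails the job. The invocation, as executed next in the log:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0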
00:16:29.553 12:28:36 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:16:29.815 fio: verification read phase will never start because write phase uses all of runtime
00:16:29.815 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:16:29.815 fio-3.35
00:16:29.815 Starting 1 process
00:16:39.787
00:16:39.787 fio_test: (groupid=0, jobs=1): err= 0: pid=75448: Mon Dec 16 12:28:46 2024
00:16:39.787   write: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(554MiB/10001msec); 0 zone resets
00:16:39.787     clat (usec): min=43, max=9884, avg=69.69, stdev=135.07
00:16:39.787      lat (usec): min=44, max=9909, avg=70.14, stdev=135.12
00:16:39.787     clat percentiles (usec):
00:16:39.787      |  1.00th=[   50],  5.00th=[   53], 10.00th=[   55], 20.00th=[   57],
00:16:39.787      | 30.00th=[   59], 40.00th=[   61], 50.00th=[   63], 60.00th=[   65],
00:16:39.787      | 70.00th=[   68], 80.00th=[   70], 90.00th=[   73], 95.00th=[   77],
00:16:39.787      | 99.00th=[   94], 99.50th=[  212], 99.90th=[ 2966], 99.95th=[ 3589],
00:16:39.787      | 99.99th=[ 4080]
00:16:39.787    bw (  KiB/s): min=14848, max=66280, per=99.30%, avg=56351.16, stdev=10716.94, samples=19
00:16:39.787    iops        : min= 3712, max=16570, avg=14087.68, stdev=2679.21, samples=19
00:16:39.787   lat (usec)   : 50=1.32%, 100=97.80%, 250=0.46%, 500=0.20%, 750=0.01%
00:16:39.787   lat (usec)   : 1000=0.01%
00:16:39.787   lat (msec)   : 2=0.06%, 4=0.13%, 10=0.02%
00:16:39.787   cpu          : usr=2.23%, sys=12.66%, ctx=141891, majf=0, minf=796
00:16:39.787   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:39.787      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:39.787      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:39.787      issued rwts: total=0,141885,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:39.787      latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:39.787
00:16:39.787 Run status group 0 (all jobs):
00:16:39.787   WRITE: bw=55.4MiB/s (58.1MB/s), 55.4MiB/s-55.4MiB/s (58.1MB/s-58.1MB/s), io=554MiB (581MB), run=10001-10001msec
00:16:39.787
00:16:39.787 Disk stats (read/write):
00:16:39.787   ublkb0: ios=0/140209, merge=0/0, ticks=0/8299, in_queue=8299, util=99.08%
00:16:39.787 12:28:46 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:16:39.787 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:39.787 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:39.787 [2024-12-16 12:28:46.855964] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:16:40.045 [2024-12-16 12:28:46.897219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:40.045 [2024-12-16 12:28:46.897896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:16:40.045 [2024-12-16 12:28:46.903180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:40.045 [2024-12-16 12:28:46.903433] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:16:40.045 [2024-12-16 12:28:46.903444] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.045 12:28:46 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:40.045 [2024-12-16 12:28:46.911236] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:16:40.045 request:
00:16:40.045 {
00:16:40.045   "ublk_id": 0,
00:16:40.045   "method": "ublk_stop_disk",
00:16:40.045   "req_id": 1
00:16:40.045 }
00:16:40.045 Got JSON-RPC error response
00:16:40.045 response:
00:16:40.045 {
00:16:40.045   "code": -19,
00:16:40.045   "message": "No such device"
00:16:40.045 }
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:40.045 12:28:46 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:40.045 [2024-12-16 12:28:46.927245] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:16:40.045 [2024-12-16 12:28:46.935178] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:16:40.045 [2024-12-16 12:28:46.935206] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.045 12:28:46 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.045 12:28:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:40.303 12:28:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.303 12:28:47 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices
00:16:40.303 12:28:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:16:40.303 12:28:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.303 12:28:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:40.303 12:28:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.303 12:28:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:16:40.303 12:28:47 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length
00:16:40.303 12:28:47
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:40.303 12:28:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:40.303 12:28:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.303 12:28:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.303 12:28:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.303 12:28:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:40.303 12:28:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:40.303 ************************************ 00:16:40.303 END TEST test_create_ublk 00:16:40.303 ************************************ 00:16:40.303 12:28:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:40.303 00:16:40.303 real 0m11.288s 00:16:40.303 user 0m0.535s 00:16:40.303 sys 0m1.354s 00:16:40.303 12:28:47 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.303 12:28:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.561 12:28:47 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:40.561 12:28:47 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:40.561 12:28:47 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.561 12:28:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.561 ************************************ 00:16:40.561 START TEST test_create_multi_ublk 00:16:40.561 ************************************ 00:16:40.561 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:40.561 12:28:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:40.561 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.561 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.561 [2024-12-16 12:28:47.451173] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:40.561 [2024-12-16 12:28:47.452845] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:40.561 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.561 12:28:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:40.561 12:28:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:40.561 12:28:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.561 12:28:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:40.561 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.561 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.819 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.819 12:28:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:40.819 12:28:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:40.819 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.819 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.819 [2024-12-16 12:28:47.679298] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:40.819 [2024-12-16 12:28:47.679627] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:40.819 [2024-12-16 12:28:47.679639] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:40.819 [2024-12-16 12:28:47.679649] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:40.819 [2024-12-16 12:28:47.699180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:40.819 [2024-12-16 12:28:47.699203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:40.819 [2024-12-16 12:28:47.711179] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:40.819 [2024-12-16 12:28:47.711707] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:40.819 [2024-12-16 12:28:47.759180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:40.819 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.820 12:28:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:40.820 12:28:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.820 12:28:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:40.820 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.820 12:28:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.078 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.078 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:41.078 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:41.078 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.078 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.078 [2024-12-16 12:28:48.023282] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:41.078 [2024-12-16 12:28:48.023601] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:41.078 [2024-12-16 12:28:48.023615] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:41.078 [2024-12-16 12:28:48.023620] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:41.078 [2024-12-16 12:28:48.035204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:41.078 [2024-12-16 12:28:48.035220] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:41.078 [2024-12-16 12:28:48.047182] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:41.078 [2024-12-16 12:28:48.047721] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:41.078 [2024-12-16 12:28:48.072183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:41.078 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.078 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:41.078 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:41.078 
12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:41.078 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.078 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.335 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.335 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:41.335 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:41.335 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.335 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.335 [2024-12-16 12:28:48.335272] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:41.335 [2024-12-16 12:28:48.335593] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:41.335 [2024-12-16 12:28:48.335604] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:41.335 [2024-12-16 12:28:48.335611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:41.335 [2024-12-16 12:28:48.347193] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:41.335 [2024-12-16 12:28:48.347214] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:41.335 [2024-12-16 12:28:48.359177] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:41.335 [2024-12-16 12:28:48.359703] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:41.335 [2024-12-16 12:28:48.372180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:41.335 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.335 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:41.335 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:41.335 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:41.335 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.336 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.594 [2024-12-16 12:28:48.575291] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:41.594 [2024-12-16 12:28:48.575611] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:41.594 [2024-12-16 12:28:48.575625] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:41.594 [2024-12-16 12:28:48.575631] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:41.594 
[2024-12-16 12:28:48.583202] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:41.594 [2024-12-16 12:28:48.583219] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:41.594 [2024-12-16 12:28:48.591180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:41.594 [2024-12-16 12:28:48.591705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:16:41.594 [2024-12-16 12:28:48.600217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:16:41.594   {
00:16:41.594     "ublk_device": "/dev/ublkb0",
00:16:41.594     "id": 0,
00:16:41.594     "queue_depth": 512,
00:16:41.594     "num_queues": 4,
00:16:41.594     "bdev_name": "Malloc0"
00:16:41.594   },
00:16:41.594   {
00:16:41.594     "ublk_device": "/dev/ublkb1",
00:16:41.594     "id": 1,
00:16:41.594     "queue_depth": 512,
00:16:41.594     "num_queues": 4,
00:16:41.594     "bdev_name": "Malloc1"
00:16:41.594   },
00:16:41.594   {
00:16:41.594     "ublk_device": "/dev/ublkb2",
00:16:41.594     "id": 2,
00:16:41.594     "queue_depth": 512,
00:16:41.594     "num_queues": 4,
00:16:41.594     "bdev_name": "Malloc2"
00:16:41.594   },
00:16:41.594   {
00:16:41.594     "ublk_device": "/dev/ublkb3",
00:16:41.594     "id": 3,
00:16:41.594     "queue_depth": 512,
00:16:41.594     "num_queues": 4,
00:16:41.594     "bdev_name": "Malloc3"
00:16:41.594   }
00:16:41.594 ]'
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:16:41.594 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 =
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:41.865 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:42.123 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:42.123 12:28:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:42.123 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:42.382 [2024-12-16 12:28:49.279251] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:42.382 [2024-12-16 12:28:49.318178] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:42.382 [2024-12-16 12:28:49.319084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:42.382 [2024-12-16 12:28:49.327176] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:42.382 [2024-12-16 12:28:49.327447] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:42.382 [2024-12-16 12:28:49.327463] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:42.382 [2024-12-16 12:28:49.335267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:42.382 [2024-12-16 12:28:49.371225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:42.382 [2024-12-16 12:28:49.371999] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:42.382 [2024-12-16 12:28:49.375426] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:42.382 [2024-12-16 12:28:49.375688] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:42.382 [2024-12-16 12:28:49.375701] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:42.382 [2024-12-16 12:28:49.394247] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:42.382 [2024-12-16 12:28:49.436748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:42.382 [2024-12-16 12:28:49.437749] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:42.382 [2024-12-16 12:28:49.447227] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:42.382 [2024-12-16 12:28:49.451386] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:42.382 [2024-12-16 12:28:49.451400] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.382 12:28:49 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.382 [2024-12-16 12:28:49.455315] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:42.640 [2024-12-16 12:28:49.498207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:42.640 [2024-12-16 12:28:49.498851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:42.640 [2024-12-16 12:28:49.506187] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:42.640 [2024-12-16 12:28:49.506414] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:42.640 [2024-12-16 12:28:49.506428] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:42.640 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.640 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:42.640 [2024-12-16 12:28:49.698229] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:42.640 [2024-12-16 12:28:49.706174] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:42.640 [2024-12-16 12:28:49.706201] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:42.640 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:42.640 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:42.640 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:42.640 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.640 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.207 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.207 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.207 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:43.207 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.207 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.465 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.465 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.465 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:43.465 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.465 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.792 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.792 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.792 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:43.792 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.792 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:44.378 12:28:51 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:44.378 ************************************ 00:16:44.378 END TEST test_create_multi_ublk 00:16:44.378 ************************************ 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:44.378 00:16:44.378 real 0m3.874s 00:16:44.378 user 0m0.826s 00:16:44.378 sys 0m0.143s 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.378 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.378 12:28:51 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:44.378 12:28:51 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:44.378 12:28:51 ublk -- ublk/ublk.sh@130 -- # killprocess 75402 00:16:44.378 12:28:51 ublk -- common/autotest_common.sh@954 -- # '[' -z 75402 ']' 00:16:44.378 12:28:51 ublk -- common/autotest_common.sh@958 -- # kill -0 75402 00:16:44.378 12:28:51 ublk -- common/autotest_common.sh@959 -- # uname 00:16:44.378 12:28:51 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.378 12:28:51 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75402 00:16:44.378 killing process with pid 75402 00:16:44.378 12:28:51 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.378 12:28:51 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.378 12:28:51 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75402' 00:16:44.378 12:28:51 ublk -- common/autotest_common.sh@973 -- # kill 75402 00:16:44.378 12:28:51 ublk -- common/autotest_common.sh@978 -- # wait 75402 00:16:45.313 [2024-12-16 12:28:52.302632] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:45.313 [2024-12-16 12:28:52.302680] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:45.882 00:16:45.882 real 0m25.382s 00:16:45.882 user 0m36.071s 00:16:45.882 sys 0m10.323s 00:16:45.882 12:28:52 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.882 12:28:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.882 ************************************ 00:16:45.882 END TEST ublk 00:16:45.882 ************************************ 00:16:46.143 12:28:53 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:46.143 
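test_create_multi_ublk, which finished above, walks four malloc bdevs and four ublk disks through the full lifecycle before tearing everything down in reverse order. A condensed sketch of the RPC sequence the trace shows, using scripts/rpc.py directly where the test goes through its rpc_cmd wrapper:

    scripts/rpc.py ublk_create_target
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512
    done
    # teardown: stop the disks, destroy the target, then delete the bdevs
    for i in 0 1 2 3; do scripts/rpc.py ublk_stop_disk $i; done
    scripts/rpc.py -t 120 ublk_destroy_target
    for i in 0 1 2 3; do scripts/rpc.py bdev_malloc_delete Malloc$i; done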
12:28:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:46.143 12:28:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.143 12:28:53 -- common/autotest_common.sh@10 -- # set +x 00:16:46.143 ************************************ 00:16:46.143 START TEST ublk_recovery 00:16:46.143 ************************************ 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:46.143 * Looking for test storage... 00:16:46.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.143 12:28:53 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:46.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.143 --rc genhtml_branch_coverage=1 00:16:46.143 --rc genhtml_function_coverage=1 00:16:46.143 --rc genhtml_legend=1 00:16:46.143 --rc geninfo_all_blocks=1 00:16:46.143 --rc geninfo_unexecuted_blocks=1 00:16:46.143 00:16:46.143 ' 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:46.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.143 --rc genhtml_branch_coverage=1 00:16:46.143 --rc genhtml_function_coverage=1 00:16:46.143 --rc genhtml_legend=1 00:16:46.143 --rc geninfo_all_blocks=1 00:16:46.143 --rc geninfo_unexecuted_blocks=1 00:16:46.143 00:16:46.143 ' 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:46.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.143 --rc genhtml_branch_coverage=1 00:16:46.143 --rc genhtml_function_coverage=1 00:16:46.143 --rc genhtml_legend=1 00:16:46.143 --rc geninfo_all_blocks=1 00:16:46.143 --rc geninfo_unexecuted_blocks=1 00:16:46.143 00:16:46.143 ' 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:46.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.143 --rc genhtml_branch_coverage=1 00:16:46.143 --rc genhtml_function_coverage=1 00:16:46.143 --rc genhtml_legend=1 00:16:46.143 --rc geninfo_all_blocks=1 00:16:46.143 --rc geninfo_unexecuted_blocks=1 00:16:46.143 00:16:46.143 ' 00:16:46.143 12:28:53 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:46.143 12:28:53 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:46.143 12:28:53 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:46.143 12:28:53 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:46.143 12:28:53 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:46.143 12:28:53 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:46.143 12:28:53 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:46.143 12:28:53 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:46.143 12:28:53 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:46.143 12:28:53 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:46.143 12:28:53 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75810 00:16:46.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.143 12:28:53 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.143 12:28:53 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75810 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75810 ']' 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.143 12:28:53 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.143 12:28:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.404 [2024-12-16 12:28:53.273847] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:46.404 [2024-12-16 12:28:53.274247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75810 ] 00:16:46.404 [2024-12-16 12:28:53.436334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:46.664 [2024-12-16 12:28:53.580785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.664 [2024-12-16 12:28:53.580927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.230 12:28:54 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.230 12:28:54 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:47.230 12:28:54 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:47.230 12:28:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.230 12:28:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.230 [2024-12-16 12:28:54.251180] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:47.230 [2024-12-16 12:28:54.253175] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:47.230 12:28:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.230 12:28:54 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:47.230 12:28:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.230 12:28:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.488 malloc0 00:16:47.488 12:28:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.488 12:28:54 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:47.488 12:28:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.488 12:28:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.488 [2024-12-16 12:28:54.363314] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
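The recovery test that follows exercises ublk crash recovery: it starts a disk, puts fio I/O on it, kills the target with SIGKILL mid-run, then asks a brand-new target to re-attach the still-live kernel device. The core sequence, condensed from the trace below (the bdev name, ublk ID, and queue settings mirror the log):

    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # on the first target
    kill -9 "$spdk_pid"                                    # simulate a crash while fio runs
    # start a fresh spdk_tgt, then recover the device in place
    scripts/rpc.py ublk_create_target
    scripts/rpc.py ublk_recover_disk malloc0 1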
2 queue_depth 128 00:16:47.488 [2024-12-16 12:28:54.363417] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:47.488 [2024-12-16 12:28:54.363428] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:47.488 [2024-12-16 12:28:54.363437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:47.488 [2024-12-16 12:28:54.372308] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:47.488 [2024-12-16 12:28:54.372330] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:47.489 [2024-12-16 12:28:54.379185] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:47.489 [2024-12-16 12:28:54.379343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:47.489 [2024-12-16 12:28:54.395202] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:47.489 1 00:16:47.489 12:28:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.489 12:28:54 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:48.423 12:28:55 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75845 00:16:48.423 12:28:55 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:48.423 12:28:55 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:48.423 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:48.423 fio-3.35 00:16:48.423 Starting 1 process 00:16:53.690 12:29:00 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75810 00:16:53.690 12:29:00 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:58.979 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75810 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:58.979 12:29:05 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75961 00:16:58.979 12:29:05 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:58.979 12:29:05 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75961 00:16:58.979 12:29:05 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:58.979 12:29:05 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75961 ']' 00:16:58.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.979 12:29:05 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.979 12:29:05 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.979 12:29:05 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.979 12:29:05 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.979 12:29:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.979 [2024-12-16 12:29:05.491817] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
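The run above is the crash half of the recovery test: bring up a ublk device, put it under I/O, then SIGKILL the target so the kernel-side /dev/ublkb1 is left orphaned while the second target instance starting here takes over. A minimal standalone sketch of the sequence just traced (rpc.py subcommands and arguments exactly as logged above; run from the SPDK repo root):

# start the target on cores 0-1 with ublk debug logging (ublk_recovery.sh@18)
build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
# create the ublk target, a 64 MiB malloc bdev with 4 KiB blocks, and
# expose it as /dev/ublkb1 with 2 queues of depth 128 (ublk_recovery.sh@23-25)
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
# drive mixed random I/O, pinned off the reactor cores, then kill the
# target mid-run (ublk_recovery.sh@30 and @36)
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &
sleep 5
kill -9 "$spdk_pid"

The restarted target then re-attaches the orphaned device with ublk_recover_disk malloc0 1 (ublk_recovery.sh@49) instead of ublk_start_disk, which drives the UBLK_CMD_GET_DEV_INFO / START_USER_RECOVERY / END_USER_RECOVERY handshake traced below; fio rides out the outage and finishes its 60-second run with err= 0.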
00:16:58.979 [2024-12-16 12:29:05.491937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75961 ] 00:16:58.979 [2024-12-16 12:29:05.645764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:58.979 [2024-12-16 12:29:05.744507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.979 [2024-12-16 12:29:05.744557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.239 12:29:06 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.239 12:29:06 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:59.239 12:29:06 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:59.239 12:29:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.239 12:29:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.239 [2024-12-16 12:29:06.332179] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:59.239 [2024-12-16 12:29:06.334008] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:59.239 12:29:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.239 12:29:06 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:59.239 12:29:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.239 12:29:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.499 malloc0 00:16:59.499 12:29:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.499 12:29:06 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:59.499 12:29:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.499 12:29:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.499 [2024-12-16 12:29:06.430290] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:59.499 [2024-12-16 12:29:06.430320] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:59.499 [2024-12-16 12:29:06.430329] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:59.499 [2024-12-16 12:29:06.437212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:59.499 [2024-12-16 12:29:06.437233] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:59.499 1 00:16:59.499 12:29:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.499 12:29:06 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75845 00:17:00.436 [2024-12-16 12:29:07.437262] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:00.436 [2024-12-16 12:29:07.446188] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:00.436 [2024-12-16 12:29:07.446204] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:01.368 [2024-12-16 12:29:08.446230] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:01.368 [2024-12-16 12:29:08.447211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:01.368 [2024-12-16 12:29:08.447225] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:02.739 [2024-12-16 12:29:09.447241] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:02.739 [2024-12-16 12:29:09.455177] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:02.739 [2024-12-16 12:29:09.455193] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:02.739 [2024-12-16 12:29:09.455201] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:02.739 [2024-12-16 12:29:09.455272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:24.694 [2024-12-16 12:29:30.927197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:24.694 [2024-12-16 12:29:30.933666] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:24.694 [2024-12-16 12:29:30.941367] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:24.694 [2024-12-16 12:29:30.941387] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:51.227 00:17:51.227 fio_test: (groupid=0, jobs=1): err= 0: pid=75848: Mon Dec 16 12:29:55 2024 00:17:51.227 read: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(3119MiB/60002msec) 00:17:51.227 slat (nsec): min=1287, max=379013, avg=5738.72, stdev=1537.37 00:17:51.227 clat (usec): min=1211, max=30542k, avg=4415.56, stdev=253450.03 00:17:51.227 lat (usec): min=1221, max=30542k, avg=4421.30, stdev=253450.03 00:17:51.227 clat percentiles (usec): 00:17:51.227 | 1.00th=[ 1909], 5.00th=[ 2057], 10.00th=[ 2089], 20.00th=[ 2114], 00:17:51.227 | 30.00th=[ 2147], 40.00th=[ 2147], 50.00th=[ 2180], 60.00th=[ 2180], 00:17:51.227 | 70.00th=[ 2212], 80.00th=[ 2212], 90.00th=[ 2311], 95.00th=[ 3326], 00:17:51.227 | 99.00th=[ 5473], 99.50th=[ 5932], 99.90th=[ 7767], 99.95th=[ 9241], 00:17:51.227 | 99.99th=[13435] 00:17:51.227 bw ( KiB/s): min=22720, max=113032, per=100.00%, avg=106565.97, stdev=16357.17, samples=59 00:17:51.227 iops : min= 5680, max=28258, avg=26641.49, stdev=4089.29, samples=59 00:17:51.227 write: IOPS=13.3k, BW=51.9MiB/s (54.4MB/s)(3114MiB/60002msec); 0 zone resets 00:17:51.227 slat (nsec): min=1453, max=207448, avg=5943.92, stdev=1505.87 00:17:51.227 clat (usec): min=1287, max=30542k, avg=5199.29, stdev=292231.20 00:17:51.227 lat (usec): min=1291, max=30542k, avg=5205.23, stdev=292231.20 00:17:51.227 clat percentiles (usec): 00:17:51.227 | 1.00th=[ 1958], 5.00th=[ 2147], 10.00th=[ 2180], 20.00th=[ 2212], 00:17:51.227 | 30.00th=[ 2245], 40.00th=[ 2245], 50.00th=[ 2278], 60.00th=[ 2278], 00:17:51.227 | 70.00th=[ 2311], 80.00th=[ 2343], 90.00th=[ 2409], 95.00th=[ 3294], 00:17:51.227 | 99.00th=[ 5604], 99.50th=[ 5997], 99.90th=[ 7767], 99.95th=[ 9372], 00:17:51.227 | 99.99th=[13435] 00:17:51.227 bw ( KiB/s): min=23072, max=113296, per=100.00%, avg=106391.32, stdev=16197.53, samples=59 00:17:51.227 iops : min= 5768, max=28324, avg=26597.83, stdev=4049.38, samples=59 00:17:51.227 lat (msec) : 2=1.81%, 4=94.86%, 10=3.28%, 20=0.04%, >=2000=0.01% 00:17:51.227 cpu : usr=2.85%, sys=16.15%, ctx=52498, majf=0, minf=13 00:17:51.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:51.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:51.227 issued rwts: total=798447,797126,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:17:51.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:51.227 00:17:51.227 Run status group 0 (all jobs): 00:17:51.227 READ: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=3119MiB (3270MB), run=60002-60002msec 00:17:51.227 WRITE: bw=51.9MiB/s (54.4MB/s), 51.9MiB/s-51.9MiB/s (54.4MB/s-54.4MB/s), io=3114MiB (3265MB), run=60002-60002msec 00:17:51.227 00:17:51.227 Disk stats (read/write): 00:17:51.227 ublkb1: ios=795384/794067, merge=0/0, ticks=3471210/4019530, in_queue=7490741, util=99.91% 00:17:51.227 12:29:55 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:51.227 [2024-12-16 12:29:55.661118] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:51.227 [2024-12-16 12:29:55.694309] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:51.227 [2024-12-16 12:29:55.694556] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:51.227 [2024-12-16 12:29:55.702197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:51.227 [2024-12-16 12:29:55.706262] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:51.227 [2024-12-16 12:29:55.706275] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.227 12:29:55 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:51.227 [2024-12-16 12:29:55.710315] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:51.227 [2024-12-16 12:29:55.717174] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:51.227 [2024-12-16 12:29:55.717226] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.227 12:29:55 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:51.227 12:29:55 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:51.227 12:29:55 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75961 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75961 ']' 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75961 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75961 00:17:51.227 killing process with pid 75961 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75961' 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75961 00:17:51.227 12:29:55 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75961 00:17:51.227 [2024-12-16 
12:29:56.820525] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:51.227 [2024-12-16 12:29:56.820577] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:51.227 ************************************ 00:17:51.227 END TEST ublk_recovery 00:17:51.227 ************************************ 00:17:51.227 00:17:51.227 real 1m4.551s 00:17:51.227 user 1m44.517s 00:17:51.227 sys 0m25.176s 00:17:51.227 12:29:57 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.227 12:29:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:51.227 12:29:57 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:51.227 12:29:57 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:51.227 12:29:57 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:51.227 12:29:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.227 12:29:57 -- common/autotest_common.sh@10 -- # set +x 00:17:51.227 12:29:57 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:51.227 12:29:57 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:51.227 12:29:57 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:51.227 12:29:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:51.227 12:29:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:51.227 12:29:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:51.227 12:29:57 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:51.227 12:29:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:51.227 12:29:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:51.227 12:29:57 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:51.227 12:29:57 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:51.227 12:29:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:51.227 12:29:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.227 12:29:57 -- common/autotest_common.sh@10 -- # set +x 00:17:51.228 ************************************ 00:17:51.228 START TEST ftl 00:17:51.228 ************************************ 00:17:51.228 12:29:57 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:51.228 * Looking for test storage... 
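What follows is the lcov version gate every test script runs: the cmp_versions helper from scripts/common.sh decides whether the installed lcov predates 2.0, and the xtrace makes the walk look more elaborate than it is. Condensed, it is numeric comparison of the dot/dash/colon-separated fields, padding the shorter version with zeros; a minimal sketch of the same logic (helper name kept, body simplified from the trace below):

# compare two version strings field by field, as cmp_versions does
cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == '=' ]]
}
cmp_versions 1.15 '<' 2 && echo "lcov older than 2: use legacy --rc options"

Here 1.15 < 2 holds, so the script selects the pre-2.0 option names (--rc lcov_branch_coverage=1 and friends), exactly as the exported LCOV_OPTS in the trace shows.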
00:17:51.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.228 12:29:57 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:51.228 12:29:57 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:17:51.228 12:29:57 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:51.228 12:29:57 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:51.228 12:29:57 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.228 12:29:57 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.228 12:29:57 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.228 12:29:57 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.228 12:29:57 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.228 12:29:57 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.228 12:29:57 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.228 12:29:57 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.228 12:29:57 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.228 12:29:57 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.228 12:29:57 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.228 12:29:57 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:51.228 12:29:57 ftl -- scripts/common.sh@345 -- # : 1 00:17:51.228 12:29:57 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.228 12:29:57 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:51.228 12:29:57 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:51.228 12:29:57 ftl -- scripts/common.sh@353 -- # local d=1 00:17:51.228 12:29:57 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.228 12:29:57 ftl -- scripts/common.sh@355 -- # echo 1 00:17:51.228 12:29:57 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.228 12:29:57 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:51.228 12:29:57 ftl -- scripts/common.sh@353 -- # local d=2 00:17:51.228 12:29:57 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.228 12:29:57 ftl -- scripts/common.sh@355 -- # echo 2 00:17:51.228 12:29:57 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.228 12:29:57 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.228 12:29:57 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.228 12:29:57 ftl -- scripts/common.sh@368 -- # return 0 00:17:51.228 12:29:57 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.228 12:29:57 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:51.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.228 --rc genhtml_branch_coverage=1 00:17:51.228 --rc genhtml_function_coverage=1 00:17:51.228 --rc genhtml_legend=1 00:17:51.228 --rc geninfo_all_blocks=1 00:17:51.228 --rc geninfo_unexecuted_blocks=1 00:17:51.228 00:17:51.228 ' 00:17:51.228 12:29:57 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:51.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.228 --rc genhtml_branch_coverage=1 00:17:51.228 --rc genhtml_function_coverage=1 00:17:51.228 --rc genhtml_legend=1 00:17:51.228 --rc geninfo_all_blocks=1 00:17:51.228 --rc geninfo_unexecuted_blocks=1 00:17:51.228 00:17:51.228 ' 00:17:51.228 12:29:57 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:51.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.228 --rc genhtml_branch_coverage=1 00:17:51.228 --rc genhtml_function_coverage=1 00:17:51.228 --rc 
genhtml_legend=1 00:17:51.228 --rc geninfo_all_blocks=1 00:17:51.228 --rc geninfo_unexecuted_blocks=1 00:17:51.228 00:17:51.228 ' 00:17:51.228 12:29:57 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:51.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.228 --rc genhtml_branch_coverage=1 00:17:51.228 --rc genhtml_function_coverage=1 00:17:51.228 --rc genhtml_legend=1 00:17:51.228 --rc geninfo_all_blocks=1 00:17:51.228 --rc geninfo_unexecuted_blocks=1 00:17:51.228 00:17:51.228 ' 00:17:51.228 12:29:57 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:51.228 12:29:57 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:51.228 12:29:57 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.228 12:29:57 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.228 12:29:57 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:51.228 12:29:57 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:51.228 12:29:57 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.228 12:29:57 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:51.228 12:29:57 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:51.228 12:29:57 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.228 12:29:57 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.228 12:29:57 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:51.228 12:29:57 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:51.228 12:29:57 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:51.228 12:29:57 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:51.228 12:29:57 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:51.228 12:29:57 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:51.228 12:29:57 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.228 12:29:57 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.228 12:29:57 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:51.228 12:29:57 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:51.228 12:29:57 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:51.228 12:29:57 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:51.228 12:29:57 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:51.228 12:29:57 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:51.228 12:29:57 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:51.228 12:29:57 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:51.228 12:29:57 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.228 12:29:57 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.228 12:29:57 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.228 12:29:57 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:51.228 12:29:57 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:51.228 12:29:57 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:51.228 12:29:57 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:51.228 12:29:57 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:51.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:51.228 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:51.228 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:51.228 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:51.228 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:51.228 12:29:58 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76766 00:17:51.228 12:29:58 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76766 00:17:51.228 12:29:58 ftl -- common/autotest_common.sh@835 -- # '[' -z 76766 ']' 00:17:51.228 12:29:58 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:51.228 12:29:58 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.228 12:29:58 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.228 12:29:58 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.228 12:29:58 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.228 12:29:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:51.488 [2024-12-16 12:29:58.393630] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:51.488 [2024-12-16 12:29:58.393892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76766 ] 00:17:51.488 [2024-12-16 12:29:58.547659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.749 [2024-12-16 12:29:58.639323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.321 12:29:59 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.321 12:29:59 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:52.321 12:29:59 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:52.321 12:29:59 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:53.264 12:30:00 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:53.264 12:30:00 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:53.525 12:30:00 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:53.525 12:30:00 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:53.525 12:30:00 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:53.786 12:30:00 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:53.786 12:30:00 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:53.786 12:30:00 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:53.786 12:30:00 ftl -- ftl/ftl.sh@50 -- # break 00:17:53.786 12:30:00 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:53.786 12:30:00 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:17:53.786 12:30:00 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:53.786 12:30:00 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:54.046 12:30:00 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:54.046 12:30:00 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:54.046 12:30:00 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:54.046 12:30:00 ftl -- ftl/ftl.sh@63 -- # break 00:17:54.046 12:30:00 ftl -- ftl/ftl.sh@66 -- # killprocess 76766 00:17:54.046 12:30:00 ftl -- common/autotest_common.sh@954 -- # '[' -z 76766 ']' 00:17:54.046 12:30:00 ftl -- common/autotest_common.sh@958 -- # kill -0 76766 00:17:54.046 12:30:00 ftl -- common/autotest_common.sh@959 -- # uname 00:17:54.046 12:30:00 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.046 12:30:00 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76766 00:17:54.046 12:30:00 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.046 killing process with pid 76766 00:17:54.046 12:30:00 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.046 12:30:00 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76766' 00:17:54.046 12:30:00 ftl -- common/autotest_common.sh@973 -- # kill 76766 00:17:54.046 12:30:00 ftl -- common/autotest_common.sh@978 -- # wait 76766 00:17:55.462 12:30:02 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:55.462 12:30:02 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:55.462 12:30:02 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:55.462 12:30:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.462 12:30:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:55.462 ************************************ 00:17:55.462 START TEST ftl_fio_basic 00:17:55.462 ************************************ 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:55.462 * Looking for test storage... 
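fio.sh above was handed 0000:00:11.0 and 0000:00:10.0 by the device probe in ftl.sh: the write-buffer cache must be a non-zoned NVMe bdev exposing 64-byte per-block metadata, while the base device merely needs the same minimum capacity on a different PCI address. Condensed from the two bdev_get_bdevs | jq pipelines traced above:

# nv-cache candidates: 64 B metadata, non-zoned, >= 1310720 blocks (ftl.sh@47)
scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
         .driver_specific.nvme[].pci_address'        # -> 0000:00:10.0
# base candidates: any other non-zoned bdev of sufficient size (ftl.sh@60)
scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
         and .zoned == false and .num_blocks >= 1310720)
         .driver_specific.nvme[].pci_address'        # -> 0000:00:11.0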
00:17:55.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:55.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.462 --rc genhtml_branch_coverage=1 00:17:55.462 --rc genhtml_function_coverage=1 00:17:55.462 --rc genhtml_legend=1 00:17:55.462 --rc geninfo_all_blocks=1 00:17:55.462 --rc geninfo_unexecuted_blocks=1 00:17:55.462 00:17:55.462 ' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:55.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.462 --rc 
genhtml_branch_coverage=1 00:17:55.462 --rc genhtml_function_coverage=1 00:17:55.462 --rc genhtml_legend=1 00:17:55.462 --rc geninfo_all_blocks=1 00:17:55.462 --rc geninfo_unexecuted_blocks=1 00:17:55.462 00:17:55.462 ' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:55.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.462 --rc genhtml_branch_coverage=1 00:17:55.462 --rc genhtml_function_coverage=1 00:17:55.462 --rc genhtml_legend=1 00:17:55.462 --rc geninfo_all_blocks=1 00:17:55.462 --rc geninfo_unexecuted_blocks=1 00:17:55.462 00:17:55.462 ' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:55.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.462 --rc genhtml_branch_coverage=1 00:17:55.462 --rc genhtml_function_coverage=1 00:17:55.462 --rc genhtml_legend=1 00:17:55.462 --rc geninfo_all_blocks=1 00:17:55.462 --rc geninfo_unexecuted_blocks=1 00:17:55.462 00:17:55.462 ' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:55.462 
12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76899 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76899 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76899 ']' 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
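fio.sh starts its own target with -m 7 (binary 111), so three reactors come up on cores 0-2, as the notices just below confirm. The workload list for this invocation comes from the associative array declared at fio.sh@11-14 above; 'basic' is the suite selected on the command line:

# suite table from fio.sh, as traced above; 'basic' is what runs here
declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
tests=${suite['basic']}   # three jobs, each run against bdev ftl0 with a 240 s timeout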
00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.462 12:30:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:55.462 [2024-12-16 12:30:02.493603] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:55.462 [2024-12-16 12:30:02.493938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76899 ] 00:17:55.722 [2024-12-16 12:30:02.649245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:55.722 [2024-12-16 12:30:02.740621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.722 [2024-12-16 12:30:02.740842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.722 [2024-12-16 12:30:02.740842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.293 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.293 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:56.293 12:30:03 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:56.293 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:56.293 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:56.293 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:56.293 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:56.293 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:56.554 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:56.554 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:56.554 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:56.554 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:56.554 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:56.554 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:56.554 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:56.554 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:56.815 { 00:17:56.815 "name": "nvme0n1", 00:17:56.815 "aliases": [ 00:17:56.815 "2516d92a-0edb-4c9b-a3ac-50c92741becb" 00:17:56.815 ], 00:17:56.815 "product_name": "NVMe disk", 00:17:56.815 "block_size": 4096, 00:17:56.815 "num_blocks": 1310720, 00:17:56.815 "uuid": "2516d92a-0edb-4c9b-a3ac-50c92741becb", 00:17:56.815 "numa_id": -1, 00:17:56.815 "assigned_rate_limits": { 00:17:56.815 "rw_ios_per_sec": 0, 00:17:56.815 "rw_mbytes_per_sec": 0, 00:17:56.815 "r_mbytes_per_sec": 0, 00:17:56.815 "w_mbytes_per_sec": 0 00:17:56.815 }, 00:17:56.815 "claimed": false, 00:17:56.815 "zoned": false, 00:17:56.815 "supported_io_types": { 00:17:56.815 "read": true, 00:17:56.815 "write": true, 00:17:56.815 "unmap": true, 00:17:56.815 "flush": true, 00:17:56.815 "reset": true, 00:17:56.815 "nvme_admin": true, 00:17:56.815 "nvme_io": true, 00:17:56.815 "nvme_io_md": 
false, 00:17:56.815 "write_zeroes": true, 00:17:56.815 "zcopy": false, 00:17:56.815 "get_zone_info": false, 00:17:56.815 "zone_management": false, 00:17:56.815 "zone_append": false, 00:17:56.815 "compare": true, 00:17:56.815 "compare_and_write": false, 00:17:56.815 "abort": true, 00:17:56.815 "seek_hole": false, 00:17:56.815 "seek_data": false, 00:17:56.815 "copy": true, 00:17:56.815 "nvme_iov_md": false 00:17:56.815 }, 00:17:56.815 "driver_specific": { 00:17:56.815 "nvme": [ 00:17:56.815 { 00:17:56.815 "pci_address": "0000:00:11.0", 00:17:56.815 "trid": { 00:17:56.815 "trtype": "PCIe", 00:17:56.815 "traddr": "0000:00:11.0" 00:17:56.815 }, 00:17:56.815 "ctrlr_data": { 00:17:56.815 "cntlid": 0, 00:17:56.815 "vendor_id": "0x1b36", 00:17:56.815 "model_number": "QEMU NVMe Ctrl", 00:17:56.815 "serial_number": "12341", 00:17:56.815 "firmware_revision": "8.0.0", 00:17:56.815 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:56.815 "oacs": { 00:17:56.815 "security": 0, 00:17:56.815 "format": 1, 00:17:56.815 "firmware": 0, 00:17:56.815 "ns_manage": 1 00:17:56.815 }, 00:17:56.815 "multi_ctrlr": false, 00:17:56.815 "ana_reporting": false 00:17:56.815 }, 00:17:56.815 "vs": { 00:17:56.815 "nvme_version": "1.4" 00:17:56.815 }, 00:17:56.815 "ns_data": { 00:17:56.815 "id": 1, 00:17:56.815 "can_share": false 00:17:56.815 } 00:17:56.815 } 00:17:56.815 ], 00:17:56.815 "mp_policy": "active_passive" 00:17:56.815 } 00:17:56.815 } 00:17:56.815 ]' 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:56.815 12:30:03 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:57.077 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:57.077 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:57.338 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=d6c99d8b-2b8c-4a2f-a902-2565327699ae 00:17:57.338 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d6c99d8b-2b8c-4a2f-a902-2565327699ae 00:17:57.338 12:30:04 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:57.599 12:30:04 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:57.599 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:57.599 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:57.599 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:57.600 12:30:04 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:57.600 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:57.600 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:57.600 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:57.600 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:57.600 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:57.600 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:57.600 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:57.600 { 00:17:57.600 "name": "c3d13e50-5f4e-4900-b46b-d613f132152b", 00:17:57.600 "aliases": [ 00:17:57.600 "lvs/nvme0n1p0" 00:17:57.600 ], 00:17:57.600 "product_name": "Logical Volume", 00:17:57.600 "block_size": 4096, 00:17:57.600 "num_blocks": 26476544, 00:17:57.600 "uuid": "c3d13e50-5f4e-4900-b46b-d613f132152b", 00:17:57.600 "assigned_rate_limits": { 00:17:57.600 "rw_ios_per_sec": 0, 00:17:57.600 "rw_mbytes_per_sec": 0, 00:17:57.600 "r_mbytes_per_sec": 0, 00:17:57.600 "w_mbytes_per_sec": 0 00:17:57.600 }, 00:17:57.600 "claimed": false, 00:17:57.600 "zoned": false, 00:17:57.600 "supported_io_types": { 00:17:57.600 "read": true, 00:17:57.600 "write": true, 00:17:57.600 "unmap": true, 00:17:57.600 "flush": false, 00:17:57.600 "reset": true, 00:17:57.600 "nvme_admin": false, 00:17:57.600 "nvme_io": false, 00:17:57.600 "nvme_io_md": false, 00:17:57.600 "write_zeroes": true, 00:17:57.600 "zcopy": false, 00:17:57.600 "get_zone_info": false, 00:17:57.600 "zone_management": false, 00:17:57.600 "zone_append": false, 00:17:57.600 "compare": false, 00:17:57.600 "compare_and_write": false, 00:17:57.600 "abort": false, 00:17:57.600 "seek_hole": true, 00:17:57.600 "seek_data": true, 00:17:57.600 "copy": false, 00:17:57.600 "nvme_iov_md": false 00:17:57.600 }, 00:17:57.600 "driver_specific": { 00:17:57.600 "lvol": { 00:17:57.600 "lvol_store_uuid": "d6c99d8b-2b8c-4a2f-a902-2565327699ae", 00:17:57.600 "base_bdev": "nvme0n1", 00:17:57.600 "thin_provision": true, 00:17:57.600 "num_allocated_clusters": 0, 00:17:57.600 "snapshot": false, 00:17:57.600 "clone": false, 00:17:57.600 "esnap_clone": false 00:17:57.600 } 00:17:57.600 } 00:17:57.600 } 00:17:57.600 ]' 00:17:57.600 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:57.600 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:57.600 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:57.861 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:57.861 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:57.861 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:57.861 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:57.861 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:57.861 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:57.861 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:57.861 12:30:04 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:17:58.122 12:30:04 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:58.122 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:58.122 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:58.122 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:58.122 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:58.122 12:30:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:58.122 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:58.122 { 00:17:58.122 "name": "c3d13e50-5f4e-4900-b46b-d613f132152b", 00:17:58.122 "aliases": [ 00:17:58.122 "lvs/nvme0n1p0" 00:17:58.122 ], 00:17:58.122 "product_name": "Logical Volume", 00:17:58.122 "block_size": 4096, 00:17:58.122 "num_blocks": 26476544, 00:17:58.122 "uuid": "c3d13e50-5f4e-4900-b46b-d613f132152b", 00:17:58.122 "assigned_rate_limits": { 00:17:58.122 "rw_ios_per_sec": 0, 00:17:58.122 "rw_mbytes_per_sec": 0, 00:17:58.122 "r_mbytes_per_sec": 0, 00:17:58.122 "w_mbytes_per_sec": 0 00:17:58.122 }, 00:17:58.122 "claimed": false, 00:17:58.122 "zoned": false, 00:17:58.122 "supported_io_types": { 00:17:58.122 "read": true, 00:17:58.122 "write": true, 00:17:58.122 "unmap": true, 00:17:58.122 "flush": false, 00:17:58.122 "reset": true, 00:17:58.122 "nvme_admin": false, 00:17:58.122 "nvme_io": false, 00:17:58.122 "nvme_io_md": false, 00:17:58.122 "write_zeroes": true, 00:17:58.122 "zcopy": false, 00:17:58.122 "get_zone_info": false, 00:17:58.122 "zone_management": false, 00:17:58.122 "zone_append": false, 00:17:58.122 "compare": false, 00:17:58.122 "compare_and_write": false, 00:17:58.122 "abort": false, 00:17:58.122 "seek_hole": true, 00:17:58.122 "seek_data": true, 00:17:58.122 "copy": false, 00:17:58.122 "nvme_iov_md": false 00:17:58.122 }, 00:17:58.122 "driver_specific": { 00:17:58.122 "lvol": { 00:17:58.122 "lvol_store_uuid": "d6c99d8b-2b8c-4a2f-a902-2565327699ae", 00:17:58.122 "base_bdev": "nvme0n1", 00:17:58.122 "thin_provision": true, 00:17:58.122 "num_allocated_clusters": 0, 00:17:58.122 "snapshot": false, 00:17:58.122 "clone": false, 00:17:58.122 "esnap_clone": false 00:17:58.122 } 00:17:58.122 } 00:17:58.122 } 00:17:58.122 ]' 00:17:58.122 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:58.122 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:58.122 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:58.381 
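The stray shell error that opens the next line ("[: -eq: unary operator expected", pointing at fio.sh line 52) is the classic empty-unquoted-variable failure: whatever fio.sh tests at line 52 expands to nothing, so the traced command degenerates to '[' -eq 1 ']' and the test builtin sees -eq with no left-hand operand. The variable's name is not visible in the trace; flag below is illustrative:

# an empty, unquoted variable collapses the test to '[ -eq 1 ]'
flag=
[ $flag -eq 1 ]            # bash: [: -eq: unary operator expected
# defaulting (or quoting) the expansion keeps the test well-formed
[ "${flag:-0}" -eq 1 ]     # evaluates to false, no error

Since the test only guards an optional branch, the harness carries on: the if falls through and fio.sh proceeds to size the L2P (l2p_dram_size_mb=60) and create the FTL bdev with bdev_ftl_create --l2p_dram_limit 60, as traced below.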
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:58.381 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3d13e50-5f4e-4900-b46b-d613f132152b 00:17:58.639 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:58.639 { 00:17:58.639 "name": "c3d13e50-5f4e-4900-b46b-d613f132152b", 00:17:58.639 "aliases": [ 00:17:58.639 "lvs/nvme0n1p0" 00:17:58.639 ], 00:17:58.639 "product_name": "Logical Volume", 00:17:58.639 "block_size": 4096, 00:17:58.639 "num_blocks": 26476544, 00:17:58.639 "uuid": "c3d13e50-5f4e-4900-b46b-d613f132152b", 00:17:58.639 "assigned_rate_limits": { 00:17:58.639 "rw_ios_per_sec": 0, 00:17:58.639 "rw_mbytes_per_sec": 0, 00:17:58.639 "r_mbytes_per_sec": 0, 00:17:58.639 "w_mbytes_per_sec": 0 00:17:58.639 }, 00:17:58.639 "claimed": false, 00:17:58.639 "zoned": false, 00:17:58.639 "supported_io_types": { 00:17:58.639 "read": true, 00:17:58.639 "write": true, 00:17:58.639 "unmap": true, 00:17:58.639 "flush": false, 00:17:58.639 "reset": true, 00:17:58.639 "nvme_admin": false, 00:17:58.639 "nvme_io": false, 00:17:58.639 "nvme_io_md": false, 00:17:58.639 "write_zeroes": true, 00:17:58.639 "zcopy": false, 00:17:58.639 "get_zone_info": false, 00:17:58.639 "zone_management": false, 00:17:58.639 "zone_append": false, 00:17:58.639 "compare": false, 00:17:58.639 "compare_and_write": false, 00:17:58.639 "abort": false, 00:17:58.639 "seek_hole": true, 00:17:58.639 "seek_data": true, 00:17:58.639 "copy": false, 00:17:58.639 "nvme_iov_md": false 00:17:58.639 }, 00:17:58.639 "driver_specific": { 00:17:58.639 "lvol": { 00:17:58.639 "lvol_store_uuid": "d6c99d8b-2b8c-4a2f-a902-2565327699ae", 00:17:58.639 "base_bdev": "nvme0n1", 00:17:58.639 "thin_provision": true, 00:17:58.639 "num_allocated_clusters": 0, 00:17:58.639 "snapshot": false, 00:17:58.639 "clone": false, 00:17:58.639 "esnap_clone": false 00:17:58.639 } 00:17:58.639 } 00:17:58.639 } 00:17:58.639 ]' 00:17:58.639 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:58.639 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:58.639 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:58.639 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:58.639 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:58.639 12:30:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:58.639 12:30:05 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:58.639 12:30:05 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:58.639 12:30:05 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c3d13e50-5f4e-4900-b46b-d613f132152b -c nvc0n1p0 --l2p_dram_limit 60 00:17:58.898 [2024-12-16 12:30:05.907415] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.898 [2024-12-16 12:30:05.907455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:58.898 [2024-12-16 12:30:05.907468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:58.898 [2024-12-16 12:30:05.907475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.898 [2024-12-16 12:30:05.907531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.898 [2024-12-16 12:30:05.907540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.898 [2024-12-16 12:30:05.907550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:58.898 [2024-12-16 12:30:05.907556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.898 [2024-12-16 12:30:05.907592] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:58.898 [2024-12-16 12:30:05.908154] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:58.898 [2024-12-16 12:30:05.908191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.898 [2024-12-16 12:30:05.908198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.898 [2024-12-16 12:30:05.908207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:17:58.898 [2024-12-16 12:30:05.908214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.898 [2024-12-16 12:30:05.908252] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID edf4b65f-3c32-4ce0-b4ed-610164469d81 00:17:58.898 [2024-12-16 12:30:05.909614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.898 [2024-12-16 12:30:05.909647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:58.898 [2024-12-16 12:30:05.909655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:58.898 [2024-12-16 12:30:05.909664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.898 [2024-12-16 12:30:05.916440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.898 [2024-12-16 12:30:05.916596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.898 [2024-12-16 12:30:05.916610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.697 ms 00:17:58.898 [2024-12-16 12:30:05.916620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.898 [2024-12-16 12:30:05.916711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.898 [2024-12-16 12:30:05.916721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.898 [2024-12-16 12:30:05.916729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:58.898 [2024-12-16 12:30:05.916740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.898 [2024-12-16 12:30:05.916793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.898 [2024-12-16 12:30:05.916804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:58.898 [2024-12-16 12:30:05.916812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:58.898 [2024-12-16 12:30:05.916820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:58.899 [2024-12-16 12:30:05.916846] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:58.899 [2024-12-16 12:30:05.920113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.899 [2024-12-16 12:30:05.920239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.899 [2024-12-16 12:30:05.920257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.269 ms 00:17:58.899 [2024-12-16 12:30:05.920266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.899 [2024-12-16 12:30:05.920308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.899 [2024-12-16 12:30:05.920316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:58.899 [2024-12-16 12:30:05.920324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:58.899 [2024-12-16 12:30:05.920330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.899 [2024-12-16 12:30:05.920355] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:58.899 [2024-12-16 12:30:05.920482] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:58.899 [2024-12-16 12:30:05.920496] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:58.899 [2024-12-16 12:30:05.920505] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:58.899 [2024-12-16 12:30:05.920516] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:58.899 [2024-12-16 12:30:05.920524] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:58.899 [2024-12-16 12:30:05.920532] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:58.899 [2024-12-16 12:30:05.920538] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:58.899 [2024-12-16 12:30:05.920545] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:58.899 [2024-12-16 12:30:05.920551] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:58.899 [2024-12-16 12:30:05.920559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.899 [2024-12-16 12:30:05.920567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:58.899 [2024-12-16 12:30:05.920575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:17:58.899 [2024-12-16 12:30:05.920580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.899 [2024-12-16 12:30:05.920663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.899 [2024-12-16 12:30:05.920671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:58.899 [2024-12-16 12:30:05.920678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:58.899 [2024-12-16 12:30:05.920685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.899 [2024-12-16 12:30:05.920796] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:58.899 [2024-12-16 12:30:05.920805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:58.899 
[2024-12-16 12:30:05.920815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.899 [2024-12-16 12:30:05.920821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.899 [2024-12-16 12:30:05.920829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:58.899 [2024-12-16 12:30:05.920835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:58.899 [2024-12-16 12:30:05.920842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:58.899 [2024-12-16 12:30:05.920847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:58.899 [2024-12-16 12:30:05.920855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:58.899 [2024-12-16 12:30:05.920860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.899 [2024-12-16 12:30:05.920867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:58.899 [2024-12-16 12:30:05.920873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:58.899 [2024-12-16 12:30:05.920880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.899 [2024-12-16 12:30:05.920885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:58.899 [2024-12-16 12:30:05.920899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:58.899 [2024-12-16 12:30:05.920904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.899 [2024-12-16 12:30:05.920912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:58.899 [2024-12-16 12:30:05.920918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:58.899 [2024-12-16 12:30:05.920924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.899 [2024-12-16 12:30:05.920929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:58.899 [2024-12-16 12:30:05.920936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:58.899 [2024-12-16 12:30:05.920941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.899 [2024-12-16 12:30:05.920949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:58.899 [2024-12-16 12:30:05.920954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:58.899 [2024-12-16 12:30:05.920961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.899 [2024-12-16 12:30:05.920966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:58.899 [2024-12-16 12:30:05.920972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:58.899 [2024-12-16 12:30:05.920977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.899 [2024-12-16 12:30:05.920983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:58.899 [2024-12-16 12:30:05.920989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:58.899 [2024-12-16 12:30:05.920995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.899 [2024-12-16 12:30:05.921000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:58.899 [2024-12-16 12:30:05.921009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:58.899 [2024-12-16 12:30:05.921027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:17:58.899 [2024-12-16 12:30:05.921033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:58.899 [2024-12-16 12:30:05.921038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:58.899 [2024-12-16 12:30:05.921044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:58.899 [2024-12-16 12:30:05.921050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:58.899 [2024-12-16 12:30:05.921056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:58.899 [2024-12-16 12:30:05.921061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.899 [2024-12-16 12:30:05.921068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:58.899 [2024-12-16 12:30:05.921073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:58.899 [2024-12-16 12:30:05.921079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.899 [2024-12-16 12:30:05.921084] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:58.899 [2024-12-16 12:30:05.921092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:58.899 [2024-12-16 12:30:05.921098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.899 [2024-12-16 12:30:05.921106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.899 [2024-12-16 12:30:05.921113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:58.899 [2024-12-16 12:30:05.921121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:58.899 [2024-12-16 12:30:05.921127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:58.899 [2024-12-16 12:30:05.921134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:58.899 [2024-12-16 12:30:05.921139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:58.899 [2024-12-16 12:30:05.921145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:58.899 [2024-12-16 12:30:05.921152] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:58.899 [2024-12-16 12:30:05.921175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.899 [2024-12-16 12:30:05.921182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:58.899 [2024-12-16 12:30:05.921189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:58.899 [2024-12-16 12:30:05.921196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:58.899 [2024-12-16 12:30:05.921203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:58.899 [2024-12-16 12:30:05.921209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:58.899 [2024-12-16 12:30:05.921217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:58.899 [2024-12-16 
12:30:05.921223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:58.899 [2024-12-16 12:30:05.921230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:58.899 [2024-12-16 12:30:05.921236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:58.899 [2024-12-16 12:30:05.921245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:58.899 [2024-12-16 12:30:05.921250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:58.899 [2024-12-16 12:30:05.921258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:58.899 [2024-12-16 12:30:05.921263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:58.899 [2024-12-16 12:30:05.921270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:58.899 [2024-12-16 12:30:05.921276] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:58.899 [2024-12-16 12:30:05.921284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.899 [2024-12-16 12:30:05.921291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:58.899 [2024-12-16 12:30:05.921298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:58.899 [2024-12-16 12:30:05.921304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:58.900 [2024-12-16 12:30:05.921312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:58.900 [2024-12-16 12:30:05.921318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.900 [2024-12-16 12:30:05.921325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:58.900 [2024-12-16 12:30:05.921331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:17:58.900 [2024-12-16 12:30:05.921339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.900 [2024-12-16 12:30:05.921392] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
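Editor's note: the layout dump above cross-checks cleanly against the bdev JSON earlier in the trace: block_size 4096 x num_blocks 26476544 is exactly the 103424.00 MiB base-device capacity reported, the L2P region is 20971520 entries x 4 bytes = 80 MiB ("Region l2p ... blocks: 80.00 MiB"), and one L2P entry per 4 KiB block puts the user-visible capacity of the new ftl0 bdev at 80 GiB. With --l2p_dram_limit 60, only a ~60 MiB window of that 80 MiB table stays resident, matching the "l2p maximum resident size is: 59 (of 60) MiB" notice further down. A quick arithmetic check, with all inputs copied from this log:

bs=4096 nb=26476544
echo $(( bs * nb / 1024 / 1024 ))                  # 103424 MiB base bdev
l2p_entries=20971520 l2p_addr=4
echo $(( l2p_entries * l2p_addr / 1024 / 1024 ))   # 80 MiB L2P region
echo $(( l2p_entries * bs / 1024 / 1024 / 1024 ))  # 80 GiB exposed by ftl0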
00:17:58.900 [2024-12-16 12:30:05.921437] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:01.430 [2024-12-16 12:30:08.157284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.430 [2024-12-16 12:30:08.157333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:01.430 [2024-12-16 12:30:08.157346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2235.884 ms 00:18:01.430 [2024-12-16 12:30:08.157357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.430 [2024-12-16 12:30:08.185246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.185426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:01.431 [2024-12-16 12:30:08.185444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.673 ms 00:18:01.431 [2024-12-16 12:30:08.185454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.185593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.185605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:01.431 [2024-12-16 12:30:08.185614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:01.431 [2024-12-16 12:30:08.185627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.238409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.238451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:01.431 [2024-12-16 12:30:08.238466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.741 ms 00:18:01.431 [2024-12-16 12:30:08.238477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.238517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.238528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:01.431 [2024-12-16 12:30:08.238537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:01.431 [2024-12-16 12:30:08.238546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.239008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.239028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:01.431 [2024-12-16 12:30:08.239038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:18:01.431 [2024-12-16 12:30:08.239050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.239190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.239207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:01.431 [2024-12-16 12:30:08.239216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:18:01.431 [2024-12-16 12:30:08.239228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.255331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.255363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:01.431 [2024-12-16 
12:30:08.255373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.074 ms 00:18:01.431 [2024-12-16 12:30:08.255382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.267583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:01.431 [2024-12-16 12:30:08.284896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.285083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:01.431 [2024-12-16 12:30:08.285103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.417 ms 00:18:01.431 [2024-12-16 12:30:08.285114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.341733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.341774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:01.431 [2024-12-16 12:30:08.341793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.583 ms 00:18:01.431 [2024-12-16 12:30:08.341801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.341992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.342004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:01.431 [2024-12-16 12:30:08.342017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:18:01.431 [2024-12-16 12:30:08.342025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.365010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.365044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:01.431 [2024-12-16 12:30:08.365058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.920 ms 00:18:01.431 [2024-12-16 12:30:08.365065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.387443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.387582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:01.431 [2024-12-16 12:30:08.387602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.332 ms 00:18:01.431 [2024-12-16 12:30:08.387610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.388209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.388227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:01.431 [2024-12-16 12:30:08.388239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:18:01.431 [2024-12-16 12:30:08.388247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.467480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.467519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:01.431 [2024-12-16 12:30:08.467536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.187 ms 00:18:01.431 [2024-12-16 12:30:08.467547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 
12:30:08.492131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.492180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:01.431 [2024-12-16 12:30:08.492195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.496 ms 00:18:01.431 [2024-12-16 12:30:08.492203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.431 [2024-12-16 12:30:08.515205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.431 [2024-12-16 12:30:08.515234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:01.431 [2024-12-16 12:30:08.515246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.956 ms 00:18:01.431 [2024-12-16 12:30:08.515253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.690 [2024-12-16 12:30:08.538886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.690 [2024-12-16 12:30:08.539030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:01.690 [2024-12-16 12:30:08.539050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.584 ms 00:18:01.690 [2024-12-16 12:30:08.539058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.690 [2024-12-16 12:30:08.539109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.690 [2024-12-16 12:30:08.539118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:01.690 [2024-12-16 12:30:08.539133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:01.690 [2024-12-16 12:30:08.539141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.690 [2024-12-16 12:30:08.539246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.690 [2024-12-16 12:30:08.539257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:01.690 [2024-12-16 12:30:08.539267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:01.690 [2024-12-16 12:30:08.539275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.690 [2024-12-16 12:30:08.540282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2632.390 ms, result 0 00:18:01.690 { 00:18:01.690 "name": "ftl0", 00:18:01.690 "uuid": "edf4b65f-3c32-4ce0-b4ed-610164469d81" 00:18:01.690 } 00:18:01.690 12:30:08 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:01.690 12:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:01.690 12:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.690 12:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:01.690 12:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.690 12:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.690 12:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:01.690 12:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:01.948 [ 00:18:01.948 { 00:18:01.948 "name": "ftl0", 00:18:01.948 "aliases": [ 00:18:01.948 "edf4b65f-3c32-4ce0-b4ed-610164469d81" 00:18:01.948 ], 00:18:01.948 "product_name": "FTL 
disk", 00:18:01.948 "block_size": 4096, 00:18:01.948 "num_blocks": 20971520, 00:18:01.948 "uuid": "edf4b65f-3c32-4ce0-b4ed-610164469d81", 00:18:01.948 "assigned_rate_limits": { 00:18:01.948 "rw_ios_per_sec": 0, 00:18:01.948 "rw_mbytes_per_sec": 0, 00:18:01.948 "r_mbytes_per_sec": 0, 00:18:01.948 "w_mbytes_per_sec": 0 00:18:01.948 }, 00:18:01.948 "claimed": false, 00:18:01.948 "zoned": false, 00:18:01.948 "supported_io_types": { 00:18:01.948 "read": true, 00:18:01.948 "write": true, 00:18:01.948 "unmap": true, 00:18:01.948 "flush": true, 00:18:01.948 "reset": false, 00:18:01.948 "nvme_admin": false, 00:18:01.948 "nvme_io": false, 00:18:01.948 "nvme_io_md": false, 00:18:01.948 "write_zeroes": true, 00:18:01.948 "zcopy": false, 00:18:01.948 "get_zone_info": false, 00:18:01.948 "zone_management": false, 00:18:01.948 "zone_append": false, 00:18:01.948 "compare": false, 00:18:01.948 "compare_and_write": false, 00:18:01.948 "abort": false, 00:18:01.948 "seek_hole": false, 00:18:01.948 "seek_data": false, 00:18:01.948 "copy": false, 00:18:01.948 "nvme_iov_md": false 00:18:01.948 }, 00:18:01.948 "driver_specific": { 00:18:01.948 "ftl": { 00:18:01.948 "base_bdev": "c3d13e50-5f4e-4900-b46b-d613f132152b", 00:18:01.948 "cache": "nvc0n1p0" 00:18:01.948 } 00:18:01.948 } 00:18:01.948 } 00:18:01.948 ] 00:18:01.948 12:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:01.948 12:30:08 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:01.948 12:30:08 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:02.206 12:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:02.206 12:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:02.466 [2024-12-16 12:30:09.357251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.357376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:02.466 [2024-12-16 12:30:09.357393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:02.466 [2024-12-16 12:30:09.357407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.357441] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:02.466 [2024-12-16 12:30:09.359683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.359710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:02.466 [2024-12-16 12:30:09.359721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms 00:18:02.466 [2024-12-16 12:30:09.359727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.360122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.360137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:02.466 [2024-12-16 12:30:09.360146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:18:02.466 [2024-12-16 12:30:09.360152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.362646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.362666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:02.466 
[2024-12-16 12:30:09.362676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.456 ms 00:18:02.466 [2024-12-16 12:30:09.362683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.367334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.367357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:02.466 [2024-12-16 12:30:09.367366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.625 ms 00:18:02.466 [2024-12-16 12:30:09.367372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.385750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.385780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:02.466 [2024-12-16 12:30:09.385804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.303 ms 00:18:02.466 [2024-12-16 12:30:09.385810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.398294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.398322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:02.466 [2024-12-16 12:30:09.398336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.446 ms 00:18:02.466 [2024-12-16 12:30:09.398342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.398497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.398507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:02.466 [2024-12-16 12:30:09.398515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:02.466 [2024-12-16 12:30:09.398521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.416261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.416373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:02.466 [2024-12-16 12:30:09.416389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.717 ms 00:18:02.466 [2024-12-16 12:30:09.416395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.433785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.433810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:02.466 [2024-12-16 12:30:09.433820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.356 ms 00:18:02.466 [2024-12-16 12:30:09.433826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.450966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.450992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:02.466 [2024-12-16 12:30:09.451002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.099 ms 00:18:02.466 [2024-12-16 12:30:09.451007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.468055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.466 [2024-12-16 12:30:09.468080] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:02.466 [2024-12-16 12:30:09.468090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.954 ms 00:18:02.466 [2024-12-16 12:30:09.468096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.466 [2024-12-16 12:30:09.468134] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:02.466 [2024-12-16 12:30:09.468146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 
[2024-12-16 12:30:09.468320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:02.466 [2024-12-16 12:30:09.468348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:02.467 [2024-12-16 12:30:09.468489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:02.467 [2024-12-16 12:30:09.468866] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:02.467 [2024-12-16 12:30:09.468874] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: edf4b65f-3c32-4ce0-b4ed-610164469d81 00:18:02.467 [2024-12-16 12:30:09.468880] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:02.467 [2024-12-16 12:30:09.468889] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:02.467 [2024-12-16 12:30:09.468895] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:02.467 [2024-12-16 12:30:09.468904] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:02.467 [2024-12-16 12:30:09.468909] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:02.467 [2024-12-16 12:30:09.468916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:02.467 [2024-12-16 12:30:09.468923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:02.467 [2024-12-16 12:30:09.468929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:02.467 [2024-12-16 12:30:09.468934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:02.467 [2024-12-16 12:30:09.468941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.467 [2024-12-16 12:30:09.468947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:02.467 [2024-12-16 12:30:09.468956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:18:02.467 [2024-12-16 12:30:09.468961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.467 [2024-12-16 12:30:09.479057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.467 [2024-12-16 12:30:09.479177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:02.467 [2024-12-16 12:30:09.479194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.060 ms 00:18:02.467 [2024-12-16 12:30:09.479200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.468 [2024-12-16 12:30:09.479498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.468 [2024-12-16 12:30:09.479511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:02.468 [2024-12-16 12:30:09.479520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:18:02.468 [2024-12-16 12:30:09.479526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.468 [2024-12-16 12:30:09.516204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.468 [2024-12-16 12:30:09.516231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:02.468 [2024-12-16 12:30:09.516241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.468 [2024-12-16 12:30:09.516247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
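Editor's note: the statistics block above also explains the "WAF: inf" line. Write amplification factor is total media writes divided by user writes, and since the device is unloaded before any fio traffic has run, all 960 recorded writes are FTL metadata and the user count is zero. A sketch of the same computation (the function and its name are illustrative, not SPDK's actual ftl_debug.c code):

waf() {
  local total=$1 user=$2
  # division by zero means no user I/O yet; report inf like the dump above
  (( user == 0 )) && { echo inf; return; }
  awk -v t="$total" -v u="$user" 'BEGIN { printf "%.2f\n", t / u }'
}
waf 960 0   # -> inf, matching "total writes: 960 ... user writes: 0"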
00:18:02.468 [2024-12-16 12:30:09.516300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.468 [2024-12-16 12:30:09.516308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:02.468 [2024-12-16 12:30:09.516316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.468 [2024-12-16 12:30:09.516322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.468 [2024-12-16 12:30:09.516394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.468 [2024-12-16 12:30:09.516405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:02.468 [2024-12-16 12:30:09.516413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.468 [2024-12-16 12:30:09.516419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.468 [2024-12-16 12:30:09.516443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.468 [2024-12-16 12:30:09.516449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:02.468 [2024-12-16 12:30:09.516456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.468 [2024-12-16 12:30:09.516463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.726 [2024-12-16 12:30:09.583355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.726 [2024-12-16 12:30:09.583393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:02.726 [2024-12-16 12:30:09.583404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.726 [2024-12-16 12:30:09.583411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.726 [2024-12-16 12:30:09.634410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.726 [2024-12-16 12:30:09.634447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.726 [2024-12-16 12:30:09.634458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.726 [2024-12-16 12:30:09.634465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.726 [2024-12-16 12:30:09.634552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.726 [2024-12-16 12:30:09.634562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.726 [2024-12-16 12:30:09.634573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.726 [2024-12-16 12:30:09.634579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.727 [2024-12-16 12:30:09.634641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.727 [2024-12-16 12:30:09.634649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.727 [2024-12-16 12:30:09.634657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.727 [2024-12-16 12:30:09.634664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.727 [2024-12-16 12:30:09.634755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.727 [2024-12-16 12:30:09.634763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.727 [2024-12-16 12:30:09.634771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.727 [2024-12-16 
12:30:09.634780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.727 [2024-12-16 12:30:09.634823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.727 [2024-12-16 12:30:09.634831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:02.727 [2024-12-16 12:30:09.634839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.727 [2024-12-16 12:30:09.634844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.727 [2024-12-16 12:30:09.634884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.727 [2024-12-16 12:30:09.634892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.727 [2024-12-16 12:30:09.634901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.727 [2024-12-16 12:30:09.634907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.727 [2024-12-16 12:30:09.634958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.727 [2024-12-16 12:30:09.634965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.727 [2024-12-16 12:30:09.634973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.727 [2024-12-16 12:30:09.634979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.727 [2024-12-16 12:30:09.635128] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 277.841 ms, result 0 00:18:02.727 true 00:18:02.727 12:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76899 00:18:02.727 12:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76899 ']' 00:18:02.727 12:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76899 00:18:02.727 12:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:02.727 12:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.727 12:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76899 00:18:02.727 killing process with pid 76899 00:18:02.727 12:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.727 12:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.727 12:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76899' 00:18:02.727 12:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76899 00:18:02.727 12:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76899 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:09.291 12:30:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:09.291 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:09.291 fio-3.35 00:18:09.291 Starting 1 thread 00:18:14.580 00:18:14.580 test: (groupid=0, jobs=1): err= 0: pid=77076: Mon Dec 16 12:30:21 2024 00:18:14.580 read: IOPS=794, BW=52.8MiB/s (55.3MB/s)(255MiB/4823msec) 00:18:14.580 slat (usec): min=2, max=116, avg= 5.07, stdev= 3.21 00:18:14.580 clat (usec): min=257, max=1394, avg=568.27, stdev=164.38 00:18:14.580 lat (usec): min=260, max=1409, avg=573.35, stdev=164.86 00:18:14.580 clat percentiles (usec): 00:18:14.580 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 359], 20.00th=[ 482], 00:18:14.580 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545], 00:18:14.580 | 70.00th=[ 570], 80.00th=[ 619], 90.00th=[ 848], 95.00th=[ 914], 00:18:14.580 | 99.00th=[ 1020], 99.50th=[ 1156], 99.90th=[ 1237], 99.95th=[ 1254], 00:18:14.580 | 99.99th=[ 1401] 00:18:14.580 write: IOPS=800, BW=53.1MiB/s (55.7MB/s)(256MiB/4818msec); 0 zone resets 00:18:14.580 slat (nsec): min=13549, max=74925, avg=23776.59, stdev=6886.60 00:18:14.580 clat (usec): min=305, max=2119, avg=643.70, stdev=186.60 00:18:14.580 lat (usec): min=332, max=2141, avg=667.47, stdev=187.29 00:18:14.580 clat percentiles (usec): 00:18:14.580 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 416], 20.00th=[ 537], 00:18:14.580 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 619], 60.00th=[ 627], 00:18:14.580 | 70.00th=[ 644], 80.00th=[ 709], 90.00th=[ 938], 95.00th=[ 996], 00:18:14.580 | 99.00th=[ 1205], 99.50th=[ 1287], 99.90th=[ 1680], 99.95th=[ 1942], 00:18:14.580 | 99.99th=[ 2114] 00:18:14.580 bw ( KiB/s): min=48144, max=65552, per=99.49%, avg=54143.11, stdev=5854.10, samples=9 00:18:14.580 iops : min= 708, max= 964, avg=796.22, stdev=86.09, samples=9 00:18:14.581 lat (usec) : 500=19.77%, 750=63.18%, 1000=14.05% 
00:18:14.581 lat (msec) : 2=2.99%, 4=0.01% 00:18:14.581 cpu : usr=99.11%, sys=0.08%, ctx=15, majf=0, minf=1167 00:18:14.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:14.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.581 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:14.581 00:18:14.581 Run status group 0 (all jobs): 00:18:14.581 READ: bw=52.8MiB/s (55.3MB/s), 52.8MiB/s-52.8MiB/s (55.3MB/s-55.3MB/s), io=255MiB (267MB), run=4823-4823msec 00:18:14.581 WRITE: bw=53.1MiB/s (55.7MB/s), 53.1MiB/s-53.1MiB/s (55.7MB/s-55.7MB/s), io=256MiB (269MB), run=4818-4818msec 00:18:15.965 ----------------------------------------------------- 00:18:15.965 Suppressions used: 00:18:15.965 count bytes template 00:18:15.965 1 5 /usr/src/fio/parse.c 00:18:15.965 1 8 libtcmalloc_minimal.so 00:18:15.965 1 904 libcrypto.so 00:18:15.965 ----------------------------------------------------- 00:18:15.965 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:15.965 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:15.965 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:15.965 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:15.965 fio-3.35 00:18:15.965 Starting 2 threads 00:18:42.519 00:18:42.519 first_half: (groupid=0, jobs=1): err= 0: pid=77190: Mon Dec 16 12:30:45 2024 00:18:42.519 read: IOPS=3082, BW=12.0MiB/s (12.6MB/s)(256MiB/21241msec) 00:18:42.519 slat (usec): min=3, max=140, avg= 3.98, stdev= 1.04 00:18:42.519 clat (usec): min=449, max=360340, avg=35139.60, stdev=22242.06 00:18:42.519 lat (usec): min=452, max=360345, avg=35143.58, stdev=22242.26 00:18:42.519 clat percentiles (msec): 00:18:42.519 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:18:42.519 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:18:42.519 | 70.00th=[ 32], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 67], 00:18:42.519 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 247], 99.95th=[ 321], 00:18:42.519 | 99.99th=[ 355] 00:18:42.519 write: IOPS=3091, BW=12.1MiB/s (12.7MB/s)(256MiB/21197msec); 0 zone resets 00:18:42.519 slat (usec): min=3, max=558, avg= 5.43, stdev= 4.54 00:18:42.519 clat (usec): min=336, max=40495, avg=6357.12, stdev=6602.55 00:18:42.519 lat (usec): min=343, max=40500, avg=6362.55, stdev=6602.66 00:18:42.519 clat percentiles (usec): 00:18:42.519 | 1.00th=[ 644], 5.00th=[ 783], 10.00th=[ 938], 20.00th=[ 2180], 00:18:42.519 | 30.00th=[ 2835], 40.00th=[ 3589], 50.00th=[ 4293], 60.00th=[ 4883], 00:18:42.519 | 70.00th=[ 5604], 80.00th=[ 9503], 90.00th=[15533], 95.00th=[23200], 00:18:42.519 | 99.00th=[30278], 99.50th=[31589], 99.90th=[34866], 99.95th=[38536], 00:18:42.519 | 99.99th=[39584] 00:18:42.519 bw ( KiB/s): min= 128, max=62744, per=100.00%, avg=24802.67, stdev=16929.00, samples=21 00:18:42.519 iops : min= 32, max=15686, avg=6200.67, stdev=4232.25, samples=21 00:18:42.519 lat (usec) : 500=0.02%, 750=1.97%, 1000=3.71% 00:18:42.519 lat (msec) : 2=3.39%, 4=13.99%, 10=18.49%, 20=6.94%, 50=48.27% 00:18:42.519 lat (msec) : 100=1.60%, 250=1.58%, 500=0.05% 00:18:42.519 cpu : usr=99.34%, sys=0.14%, ctx=53, majf=0, minf=5592 00:18:42.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:42.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.519 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:42.520 issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:42.520 second_half: (groupid=0, jobs=1): err= 0: pid=77191: Mon Dec 16 12:30:45 2024 00:18:42.520 read: IOPS=3112, BW=12.2MiB/s (12.7MB/s)(256MiB/21041msec) 00:18:42.520 slat (nsec): min=3201, max=69549, avg=5667.62, stdev=1073.52 00:18:42.520 clat (msec): min=10, max=297, avg=35.16, stdev=18.22 00:18:42.520 lat (msec): min=10, max=297, avg=35.16, stdev=18.22 00:18:42.520 clat percentiles (msec): 00:18:42.520 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 28], 00:18:42.520 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:18:42.520 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 61], 00:18:42.520 | 
99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 161], 99.95th=[ 169], 00:18:42.520 | 99.99th=[ 234] 00:18:42.520 write: IOPS=3130, BW=12.2MiB/s (12.8MB/s)(256MiB/20936msec); 0 zone resets 00:18:42.520 slat (usec): min=3, max=1895, avg= 6.72, stdev=12.18 00:18:42.520 clat (usec): min=332, max=33654, avg=5943.49, stdev=4697.33 00:18:42.520 lat (usec): min=341, max=33660, avg=5950.21, stdev=4697.95 00:18:42.520 clat percentiles (usec): 00:18:42.520 | 1.00th=[ 783], 5.00th=[ 1401], 10.00th=[ 2278], 20.00th=[ 2900], 00:18:42.520 | 30.00th=[ 3523], 40.00th=[ 4113], 50.00th=[ 4817], 60.00th=[ 5276], 00:18:42.520 | 70.00th=[ 5735], 80.00th=[ 6259], 90.00th=[13435], 95.00th=[16319], 00:18:42.520 | 99.00th=[24249], 99.50th=[26608], 99.90th=[30278], 99.95th=[31589], 00:18:42.520 | 99.99th=[32113] 00:18:42.520 bw ( KiB/s): min= 2928, max=47552, per=96.35%, avg=23831.27, stdev=14501.52, samples=22 00:18:42.520 iops : min= 732, max=11888, avg=5957.82, stdev=3625.38, samples=22 00:18:42.520 lat (usec) : 500=0.04%, 750=0.39%, 1000=0.86% 00:18:42.520 lat (msec) : 2=2.59%, 4=15.40%, 10=23.11%, 20=6.72%, 50=47.85% 00:18:42.520 lat (msec) : 100=1.62%, 250=1.42%, 500=0.01% 00:18:42.520 cpu : usr=99.23%, sys=0.14%, ctx=33, majf=0, minf=5519 00:18:42.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:42.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.520 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:42.520 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:42.520 00:18:42.520 Run status group 0 (all jobs): 00:18:42.520 READ: bw=24.1MiB/s (25.3MB/s), 12.0MiB/s-12.2MiB/s (12.6MB/s-12.7MB/s), io=512MiB (536MB), run=21041-21241msec 00:18:42.520 WRITE: bw=24.2MiB/s (25.3MB/s), 12.1MiB/s-12.2MiB/s (12.7MB/s-12.8MB/s), io=512MiB (537MB), run=20936-21197msec 00:18:42.520 ----------------------------------------------------- 00:18:42.520 Suppressions used: 00:18:42.520 count bytes template 00:18:42.520 2 10 /usr/src/fio/parse.c 00:18:42.520 3 288 /usr/src/fio/iolog.c 00:18:42.520 1 8 libtcmalloc_minimal.so 00:18:42.520 1 904 libcrypto.so 00:18:42.520 ----------------------------------------------------- 00:18:42.520 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:42.520 12:30:47 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:42.520 12:30:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:42.520 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:42.520 fio-3.35 00:18:42.520 Starting 1 thread 00:18:54.726 00:18:54.726 test: (groupid=0, jobs=1): err= 0: pid=77476: Mon Dec 16 12:31:01 2024 00:18:54.726 read: IOPS=8239, BW=32.2MiB/s (33.8MB/s)(255MiB/7913msec) 00:18:54.726 slat (nsec): min=3173, max=27237, avg=4912.87, stdev=1069.70 00:18:54.726 clat (usec): min=555, max=33104, avg=15525.28, stdev=1785.95 00:18:54.726 lat (usec): min=559, max=33108, avg=15530.19, stdev=1786.00 00:18:54.726 clat percentiles (usec): 00:18:54.726 | 1.00th=[13829], 5.00th=[13960], 10.00th=[14091], 20.00th=[14353], 00:18:54.726 | 30.00th=[14615], 40.00th=[14746], 50.00th=[15401], 60.00th=[15664], 00:18:54.726 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16450], 95.00th=[17957], 00:18:54.726 | 99.00th=[23725], 99.50th=[24773], 99.90th=[27657], 99.95th=[29230], 00:18:54.726 | 99.99th=[32375] 00:18:54.726 write: IOPS=13.0k, BW=50.6MiB/s (53.1MB/s)(256MiB/5060msec); 0 zone resets 00:18:54.726 slat (usec): min=4, max=673, avg= 7.14, stdev= 4.39 00:18:54.726 clat (usec): min=450, max=46629, avg=9837.40, stdev=10406.54 00:18:54.726 lat (usec): min=456, max=46634, avg=9844.55, stdev=10406.75 00:18:54.726 clat percentiles (usec): 00:18:54.726 | 1.00th=[ 594], 5.00th=[ 685], 10.00th=[ 742], 20.00th=[ 840], 00:18:54.726 | 30.00th=[ 1029], 40.00th=[ 1467], 50.00th=[ 5014], 60.00th=[10683], 00:18:54.726 | 70.00th=[14484], 80.00th=[17957], 90.00th=[27657], 95.00th=[30540], 00:18:54.726 | 99.00th=[35914], 99.50th=[36963], 99.90th=[39060], 99.95th=[40109], 00:18:54.726 | 99.99th=[45351] 00:18:54.726 bw ( KiB/s): min= 5512, max=87368, per=92.00%, avg=47662.55, stdev=21527.80, samples=11 00:18:54.726 iops : min= 1378, max=21842, avg=11915.64, stdev=5381.95, samples=11 00:18:54.726 lat (usec) : 500=0.01%, 750=5.39%, 1000=9.00% 00:18:54.726 lat (msec) : 2=6.16%, 4=0.92%, 10=7.94%, 20=60.48%, 50=10.10% 00:18:54.726 cpu : usr=99.07%, sys=0.19%, ctx=36, majf=0, minf=5563 00:18:54.726 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:54.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.726 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.726 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.726 00:18:54.726 Run status group 0 (all jobs): 00:18:54.726 READ: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=255MiB (267MB), run=7913-7913msec 00:18:54.726 WRITE: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=256MiB (268MB), run=5060-5060msec 00:18:56.111 ----------------------------------------------------- 00:18:56.111 Suppressions used: 00:18:56.111 count bytes template 00:18:56.111 1 5 /usr/src/fio/parse.c 00:18:56.111 2 192 /usr/src/fio/iolog.c 00:18:56.111 1 8 libtcmalloc_minimal.so 00:18:56.111 1 904 libcrypto.so 00:18:56.111 ----------------------------------------------------- 00:18:56.111 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:56.111 Remove shared memory files 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58942 /dev/shm/spdk_tgt_trace.pid75810 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:56.111 ************************************ 00:18:56.111 END TEST ftl_fio_basic 00:18:56.111 ************************************ 00:18:56.111 00:18:56.111 real 1m0.855s 00:18:56.111 user 2m8.601s 00:18:56.111 sys 0m2.762s 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.111 12:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:56.111 12:31:03 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:56.111 12:31:03 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:56.111 12:31:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.111 12:31:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:56.111 ************************************ 00:18:56.111 START TEST ftl_bdevperf 00:18:56.111 ************************************ 00:18:56.111 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:56.111 * Looking for test storage... 
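The ftl_bdevperf suite starting here drives SPDK's standalone bdevperf example application against an FTL bdev assembled from the two PCIe devices passed on the command line (0000:00:11.0 as the base device, 0000:00:10.0 as the NV cache). A condensed sketch of the setup the script performs, reconstructed from the traces that follow in this run (flags and paths are taken from the log itself; treat this as illustrative, not as the literal contents of bdevperf.sh):

  # start bdevperf idle; per the trace below, -z makes it wait to be
  # configured over RPC and -T restricts the test to the bdev named ftl0
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  # attach the base and NV-cache NVMe controllers over RPC
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  # split a 5171 MiB cache partition and build the FTL bdev on top of the
  # base lvol (a UUID-named lvol in this run, shown as <base_bdev> here)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <base_bdev> -c nvc0n1p0 --l2p_dram_limit 20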
00:18:56.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.112 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.372 --rc genhtml_branch_coverage=1 00:18:56.372 --rc genhtml_function_coverage=1 00:18:56.372 --rc genhtml_legend=1 00:18:56.372 --rc geninfo_all_blocks=1 00:18:56.372 --rc geninfo_unexecuted_blocks=1 00:18:56.372 00:18:56.372 ' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.372 --rc genhtml_branch_coverage=1 00:18:56.372 
--rc genhtml_function_coverage=1 00:18:56.372 --rc genhtml_legend=1 00:18:56.372 --rc geninfo_all_blocks=1 00:18:56.372 --rc geninfo_unexecuted_blocks=1 00:18:56.372 00:18:56.372 ' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.372 --rc genhtml_branch_coverage=1 00:18:56.372 --rc genhtml_function_coverage=1 00:18:56.372 --rc genhtml_legend=1 00:18:56.372 --rc geninfo_all_blocks=1 00:18:56.372 --rc geninfo_unexecuted_blocks=1 00:18:56.372 00:18:56.372 ' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.372 --rc genhtml_branch_coverage=1 00:18:56.372 --rc genhtml_function_coverage=1 00:18:56.372 --rc genhtml_legend=1 00:18:56.372 --rc geninfo_all_blocks=1 00:18:56.372 --rc geninfo_unexecuted_blocks=1 00:18:56.372 00:18:56.372 ' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77703 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77703 00:18:56.372 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77703 ']' 00:18:56.373 12:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:56.373 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.373 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.373 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.373 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.373 12:31:03 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:56.373 [2024-12-16 12:31:03.377407] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
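Because bdevperf was launched with -z, it comes up idle and waits to be configured over its RPC socket; the harness's waitforlisten helper blocks until that socket answers before issuing any bdev RPCs. Conceptually this amounts to a poll loop like the following (a sketch of the idea only, not the helper's actual code):

  # poll the app's RPC socket until it responds, then continue;
  # spdk_get_version is a cheap RPC that any live SPDK app answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done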
00:18:56.373 [2024-12-16 12:31:03.378113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77703 ] 00:18:56.633 [2024-12-16 12:31:03.535303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.633 [2024-12-16 12:31:03.626216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.204 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.204 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:57.204 12:31:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:57.204 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:57.204 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:57.204 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:57.204 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:57.204 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:57.472 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:57.472 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:57.472 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:57.472 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:57.472 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:57.472 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:57.472 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:57.472 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:57.773 { 00:18:57.773 "name": "nvme0n1", 00:18:57.773 "aliases": [ 00:18:57.773 "a11498aa-5dad-4fda-818a-90f809853c33" 00:18:57.773 ], 00:18:57.773 "product_name": "NVMe disk", 00:18:57.773 "block_size": 4096, 00:18:57.773 "num_blocks": 1310720, 00:18:57.773 "uuid": "a11498aa-5dad-4fda-818a-90f809853c33", 00:18:57.773 "numa_id": -1, 00:18:57.773 "assigned_rate_limits": { 00:18:57.773 "rw_ios_per_sec": 0, 00:18:57.773 "rw_mbytes_per_sec": 0, 00:18:57.773 "r_mbytes_per_sec": 0, 00:18:57.773 "w_mbytes_per_sec": 0 00:18:57.773 }, 00:18:57.773 "claimed": true, 00:18:57.773 "claim_type": "read_many_write_one", 00:18:57.773 "zoned": false, 00:18:57.773 "supported_io_types": { 00:18:57.773 "read": true, 00:18:57.773 "write": true, 00:18:57.773 "unmap": true, 00:18:57.773 "flush": true, 00:18:57.773 "reset": true, 00:18:57.773 "nvme_admin": true, 00:18:57.773 "nvme_io": true, 00:18:57.773 "nvme_io_md": false, 00:18:57.773 "write_zeroes": true, 00:18:57.773 "zcopy": false, 00:18:57.773 "get_zone_info": false, 00:18:57.773 "zone_management": false, 00:18:57.773 "zone_append": false, 00:18:57.773 "compare": true, 00:18:57.773 "compare_and_write": false, 00:18:57.773 "abort": true, 00:18:57.773 "seek_hole": false, 00:18:57.773 "seek_data": false, 00:18:57.773 "copy": true, 00:18:57.773 "nvme_iov_md": false 00:18:57.773 }, 00:18:57.773 "driver_specific": { 00:18:57.773 
"nvme": [ 00:18:57.773 { 00:18:57.773 "pci_address": "0000:00:11.0", 00:18:57.773 "trid": { 00:18:57.773 "trtype": "PCIe", 00:18:57.773 "traddr": "0000:00:11.0" 00:18:57.773 }, 00:18:57.773 "ctrlr_data": { 00:18:57.773 "cntlid": 0, 00:18:57.773 "vendor_id": "0x1b36", 00:18:57.773 "model_number": "QEMU NVMe Ctrl", 00:18:57.773 "serial_number": "12341", 00:18:57.773 "firmware_revision": "8.0.0", 00:18:57.773 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:57.773 "oacs": { 00:18:57.773 "security": 0, 00:18:57.773 "format": 1, 00:18:57.773 "firmware": 0, 00:18:57.773 "ns_manage": 1 00:18:57.773 }, 00:18:57.773 "multi_ctrlr": false, 00:18:57.773 "ana_reporting": false 00:18:57.773 }, 00:18:57.773 "vs": { 00:18:57.773 "nvme_version": "1.4" 00:18:57.773 }, 00:18:57.773 "ns_data": { 00:18:57.773 "id": 1, 00:18:57.773 "can_share": false 00:18:57.773 } 00:18:57.773 } 00:18:57.773 ], 00:18:57.773 "mp_policy": "active_passive" 00:18:57.773 } 00:18:57.773 } 00:18:57.773 ]' 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:57.773 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:58.049 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=d6c99d8b-2b8c-4a2f-a902-2565327699ae 00:18:58.049 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:58.049 12:31:04 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6c99d8b-2b8c-4a2f-a902-2565327699ae 00:18:58.309 12:31:05 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:58.309 12:31:05 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=15410f5c-a6d1-412b-b655-793df453aa91 00:18:58.309 12:31:05 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 15410f5c-a6d1-412b-b655-793df453aa91 00:18:58.569 12:31:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=226ed7df-9455-4cab-8c79-a7315956b22a 00:18:58.569 12:31:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 226ed7df-9455-4cab-8c79-a7315956b22a 00:18:58.569 12:31:05 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:58.569 12:31:05 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:58.569 12:31:05 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=226ed7df-9455-4cab-8c79-a7315956b22a 00:18:58.569 12:31:05 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:58.569 12:31:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 226ed7df-9455-4cab-8c79-a7315956b22a 00:18:58.569 12:31:05 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=226ed7df-9455-4cab-8c79-a7315956b22a 00:18:58.569 12:31:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:58.569 12:31:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:58.569 12:31:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:58.569 12:31:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 226ed7df-9455-4cab-8c79-a7315956b22a 00:18:58.829 12:31:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:58.829 { 00:18:58.829 "name": "226ed7df-9455-4cab-8c79-a7315956b22a", 00:18:58.829 "aliases": [ 00:18:58.829 "lvs/nvme0n1p0" 00:18:58.829 ], 00:18:58.829 "product_name": "Logical Volume", 00:18:58.829 "block_size": 4096, 00:18:58.829 "num_blocks": 26476544, 00:18:58.829 "uuid": "226ed7df-9455-4cab-8c79-a7315956b22a", 00:18:58.829 "assigned_rate_limits": { 00:18:58.829 "rw_ios_per_sec": 0, 00:18:58.829 "rw_mbytes_per_sec": 0, 00:18:58.829 "r_mbytes_per_sec": 0, 00:18:58.829 "w_mbytes_per_sec": 0 00:18:58.829 }, 00:18:58.829 "claimed": false, 00:18:58.829 "zoned": false, 00:18:58.829 "supported_io_types": { 00:18:58.829 "read": true, 00:18:58.829 "write": true, 00:18:58.829 "unmap": true, 00:18:58.829 "flush": false, 00:18:58.829 "reset": true, 00:18:58.829 "nvme_admin": false, 00:18:58.829 "nvme_io": false, 00:18:58.829 "nvme_io_md": false, 00:18:58.829 "write_zeroes": true, 00:18:58.829 "zcopy": false, 00:18:58.829 "get_zone_info": false, 00:18:58.829 "zone_management": false, 00:18:58.829 "zone_append": false, 00:18:58.829 "compare": false, 00:18:58.829 "compare_and_write": false, 00:18:58.829 "abort": false, 00:18:58.829 "seek_hole": true, 00:18:58.829 "seek_data": true, 00:18:58.829 "copy": false, 00:18:58.829 "nvme_iov_md": false 00:18:58.829 }, 00:18:58.829 "driver_specific": { 00:18:58.829 "lvol": { 00:18:58.829 "lvol_store_uuid": "15410f5c-a6d1-412b-b655-793df453aa91", 00:18:58.829 "base_bdev": "nvme0n1", 00:18:58.829 "thin_provision": true, 00:18:58.829 "num_allocated_clusters": 0, 00:18:58.829 "snapshot": false, 00:18:58.829 "clone": false, 00:18:58.829 "esnap_clone": false 00:18:58.829 } 00:18:58.829 } 00:18:58.829 } 00:18:58.829 ]' 00:18:58.829 12:31:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:58.829 12:31:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:58.829 12:31:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:58.829 12:31:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:58.829 12:31:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:58.829 12:31:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:58.830 12:31:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:58.830 12:31:05 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:58.830 12:31:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:59.090 12:31:06 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:59.090 12:31:06 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:59.090 12:31:06 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 226ed7df-9455-4cab-8c79-a7315956b22a 00:18:59.090 12:31:06 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=226ed7df-9455-4cab-8c79-a7315956b22a 00:18:59.090 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:59.090 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:59.090 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:59.090 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 226ed7df-9455-4cab-8c79-a7315956b22a 00:18:59.350 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:59.350 { 00:18:59.350 "name": "226ed7df-9455-4cab-8c79-a7315956b22a", 00:18:59.350 "aliases": [ 00:18:59.350 "lvs/nvme0n1p0" 00:18:59.350 ], 00:18:59.350 "product_name": "Logical Volume", 00:18:59.350 "block_size": 4096, 00:18:59.350 "num_blocks": 26476544, 00:18:59.350 "uuid": "226ed7df-9455-4cab-8c79-a7315956b22a", 00:18:59.350 "assigned_rate_limits": { 00:18:59.350 "rw_ios_per_sec": 0, 00:18:59.350 "rw_mbytes_per_sec": 0, 00:18:59.350 "r_mbytes_per_sec": 0, 00:18:59.350 "w_mbytes_per_sec": 0 00:18:59.350 }, 00:18:59.350 "claimed": false, 00:18:59.350 "zoned": false, 00:18:59.350 "supported_io_types": { 00:18:59.350 "read": true, 00:18:59.350 "write": true, 00:18:59.350 "unmap": true, 00:18:59.350 "flush": false, 00:18:59.350 "reset": true, 00:18:59.350 "nvme_admin": false, 00:18:59.350 "nvme_io": false, 00:18:59.350 "nvme_io_md": false, 00:18:59.350 "write_zeroes": true, 00:18:59.350 "zcopy": false, 00:18:59.350 "get_zone_info": false, 00:18:59.350 "zone_management": false, 00:18:59.350 "zone_append": false, 00:18:59.350 "compare": false, 00:18:59.350 "compare_and_write": false, 00:18:59.350 "abort": false, 00:18:59.350 "seek_hole": true, 00:18:59.350 "seek_data": true, 00:18:59.350 "copy": false, 00:18:59.350 "nvme_iov_md": false 00:18:59.350 }, 00:18:59.350 "driver_specific": { 00:18:59.350 "lvol": { 00:18:59.350 "lvol_store_uuid": "15410f5c-a6d1-412b-b655-793df453aa91", 00:18:59.350 "base_bdev": "nvme0n1", 00:18:59.350 "thin_provision": true, 00:18:59.350 "num_allocated_clusters": 0, 00:18:59.350 "snapshot": false, 00:18:59.350 "clone": false, 00:18:59.350 "esnap_clone": false 00:18:59.350 } 00:18:59.350 } 00:18:59.350 } 00:18:59.350 ]' 00:18:59.350 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:59.350 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:59.350 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:59.350 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:59.350 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:59.350 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:59.350 12:31:06 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:59.350 12:31:06 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:59.611 12:31:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:59.611 12:31:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 226ed7df-9455-4cab-8c79-a7315956b22a 00:18:59.611 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=226ed7df-9455-4cab-8c79-a7315956b22a 00:18:59.611 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:59.611 12:31:06 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:59.611 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:59.611 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 226ed7df-9455-4cab-8c79-a7315956b22a 00:18:59.871 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:59.871 { 00:18:59.871 "name": "226ed7df-9455-4cab-8c79-a7315956b22a", 00:18:59.871 "aliases": [ 00:18:59.871 "lvs/nvme0n1p0" 00:18:59.871 ], 00:18:59.871 "product_name": "Logical Volume", 00:18:59.871 "block_size": 4096, 00:18:59.871 "num_blocks": 26476544, 00:18:59.871 "uuid": "226ed7df-9455-4cab-8c79-a7315956b22a", 00:18:59.871 "assigned_rate_limits": { 00:18:59.871 "rw_ios_per_sec": 0, 00:18:59.871 "rw_mbytes_per_sec": 0, 00:18:59.871 "r_mbytes_per_sec": 0, 00:18:59.871 "w_mbytes_per_sec": 0 00:18:59.871 }, 00:18:59.871 "claimed": false, 00:18:59.871 "zoned": false, 00:18:59.871 "supported_io_types": { 00:18:59.871 "read": true, 00:18:59.871 "write": true, 00:18:59.871 "unmap": true, 00:18:59.871 "flush": false, 00:18:59.871 "reset": true, 00:18:59.871 "nvme_admin": false, 00:18:59.871 "nvme_io": false, 00:18:59.871 "nvme_io_md": false, 00:18:59.871 "write_zeroes": true, 00:18:59.871 "zcopy": false, 00:18:59.871 "get_zone_info": false, 00:18:59.871 "zone_management": false, 00:18:59.871 "zone_append": false, 00:18:59.871 "compare": false, 00:18:59.871 "compare_and_write": false, 00:18:59.871 "abort": false, 00:18:59.871 "seek_hole": true, 00:18:59.871 "seek_data": true, 00:18:59.871 "copy": false, 00:18:59.871 "nvme_iov_md": false 00:18:59.871 }, 00:18:59.871 "driver_specific": { 00:18:59.871 "lvol": { 00:18:59.871 "lvol_store_uuid": "15410f5c-a6d1-412b-b655-793df453aa91", 00:18:59.871 "base_bdev": "nvme0n1", 00:18:59.871 "thin_provision": true, 00:18:59.871 "num_allocated_clusters": 0, 00:18:59.871 "snapshot": false, 00:18:59.871 "clone": false, 00:18:59.871 "esnap_clone": false 00:18:59.871 } 00:18:59.871 } 00:18:59.871 } 00:18:59.871 ]' 00:18:59.871 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:59.871 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:59.871 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:59.871 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:59.871 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:59.871 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:59.871 12:31:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:59.871 12:31:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 226ed7df-9455-4cab-8c79-a7315956b22a -c nvc0n1p0 --l2p_dram_limit 20 00:19:00.131 [2024-12-16 12:31:07.067932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.131 [2024-12-16 12:31:07.067983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:00.131 [2024-12-16 12:31:07.067997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:00.131 [2024-12-16 12:31:07.068007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.131 [2024-12-16 12:31:07.068045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.131 [2024-12-16 12:31:07.068056] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:00.131 [2024-12-16 12:31:07.068063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:00.131 [2024-12-16 12:31:07.068072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.131 [2024-12-16 12:31:07.068086] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:00.131 [2024-12-16 12:31:07.068609] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:00.131 [2024-12-16 12:31:07.068624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.131 [2024-12-16 12:31:07.068632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:00.131 [2024-12-16 12:31:07.068639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:19:00.131 [2024-12-16 12:31:07.068647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.131 [2024-12-16 12:31:07.068668] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d5268d61-5635-41a1-8313-c6ad29f3de84 00:19:00.131 [2024-12-16 12:31:07.069950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.131 [2024-12-16 12:31:07.070122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:00.131 [2024-12-16 12:31:07.070142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:00.131 [2024-12-16 12:31:07.070150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.131 [2024-12-16 12:31:07.077110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.131 [2024-12-16 12:31:07.077228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:00.131 [2024-12-16 12:31:07.077244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.886 ms 00:19:00.131 [2024-12-16 12:31:07.077253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.131 [2024-12-16 12:31:07.077324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.131 [2024-12-16 12:31:07.077332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:00.131 [2024-12-16 12:31:07.077343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:00.131 [2024-12-16 12:31:07.077349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.131 [2024-12-16 12:31:07.077386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.131 [2024-12-16 12:31:07.077411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:00.131 [2024-12-16 12:31:07.077420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:00.131 [2024-12-16 12:31:07.077425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.131 [2024-12-16 12:31:07.077443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:00.131 [2024-12-16 12:31:07.080702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.131 [2024-12-16 12:31:07.080801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:00.131 [2024-12-16 12:31:07.080812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:19:00.131 [2024-12-16 12:31:07.080823] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.131 [2024-12-16 12:31:07.080850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.131 [2024-12-16 12:31:07.080858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:00.131 [2024-12-16 12:31:07.080864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:00.131 [2024-12-16 12:31:07.080872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.131 [2024-12-16 12:31:07.080883] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:00.131 [2024-12-16 12:31:07.081001] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:00.131 [2024-12-16 12:31:07.081011] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:00.131 [2024-12-16 12:31:07.081021] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:00.131 [2024-12-16 12:31:07.081029] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:00.132 [2024-12-16 12:31:07.081039] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:00.132 [2024-12-16 12:31:07.081046] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:00.132 [2024-12-16 12:31:07.081053] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:00.132 [2024-12-16 12:31:07.081059] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:00.132 [2024-12-16 12:31:07.081066] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:00.132 [2024-12-16 12:31:07.081074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.132 [2024-12-16 12:31:07.081082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:00.132 [2024-12-16 12:31:07.081088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:19:00.132 [2024-12-16 12:31:07.081095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.132 [2024-12-16 12:31:07.081175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.132 [2024-12-16 12:31:07.081188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:00.132 [2024-12-16 12:31:07.081195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:00.132 [2024-12-16 12:31:07.081204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.132 [2024-12-16 12:31:07.081273] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:00.132 [2024-12-16 12:31:07.081285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:00.132 [2024-12-16 12:31:07.081292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:00.132 [2024-12-16 12:31:07.081300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:00.132 [2024-12-16 12:31:07.081313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:00.132 
[2024-12-16 12:31:07.081327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:00.132 [2024-12-16 12:31:07.081333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:00.132 [2024-12-16 12:31:07.081346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:00.132 [2024-12-16 12:31:07.081358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:00.132 [2024-12-16 12:31:07.081364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:00.132 [2024-12-16 12:31:07.081371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:00.132 [2024-12-16 12:31:07.081377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:00.132 [2024-12-16 12:31:07.081385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:00.132 [2024-12-16 12:31:07.081407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:00.132 [2024-12-16 12:31:07.081412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:00.132 [2024-12-16 12:31:07.081424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.132 [2024-12-16 12:31:07.081437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:00.132 [2024-12-16 12:31:07.081444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.132 [2024-12-16 12:31:07.081456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:00.132 [2024-12-16 12:31:07.081461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.132 [2024-12-16 12:31:07.081473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:00.132 [2024-12-16 12:31:07.081480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.132 [2024-12-16 12:31:07.081494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:00.132 [2024-12-16 12:31:07.081500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:00.132 [2024-12-16 12:31:07.081512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:00.132 [2024-12-16 12:31:07.081519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:00.132 [2024-12-16 12:31:07.081524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:00.132 [2024-12-16 12:31:07.081530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:00.132 [2024-12-16 12:31:07.081536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:00.132 [2024-12-16 12:31:07.081542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:00.132 [2024-12-16 12:31:07.081555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:00.132 [2024-12-16 12:31:07.081561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081567] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:00.132 [2024-12-16 12:31:07.081574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:00.132 [2024-12-16 12:31:07.081581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:00.132 [2024-12-16 12:31:07.081587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.132 [2024-12-16 12:31:07.081597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:00.132 [2024-12-16 12:31:07.081602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:00.132 [2024-12-16 12:31:07.081610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:00.132 [2024-12-16 12:31:07.081616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:00.132 [2024-12-16 12:31:07.081622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:00.132 [2024-12-16 12:31:07.081628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:00.132 [2024-12-16 12:31:07.081636] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:00.132 [2024-12-16 12:31:07.081644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:00.132 [2024-12-16 12:31:07.081652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:00.132 [2024-12-16 12:31:07.081659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:00.132 [2024-12-16 12:31:07.081666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:00.132 [2024-12-16 12:31:07.081672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:00.132 [2024-12-16 12:31:07.081679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:00.132 [2024-12-16 12:31:07.081686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:00.132 [2024-12-16 12:31:07.081693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:00.132 [2024-12-16 12:31:07.081698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:00.132 [2024-12-16 12:31:07.081707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:00.132 [2024-12-16 12:31:07.081712] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:00.132 [2024-12-16 12:31:07.081720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:00.132 [2024-12-16 12:31:07.081726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:00.132 [2024-12-16 12:31:07.081733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:00.132 [2024-12-16 12:31:07.081739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:00.132 [2024-12-16 12:31:07.081745] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:00.132 [2024-12-16 12:31:07.081751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:00.132 [2024-12-16 12:31:07.081760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:00.132 [2024-12-16 12:31:07.081766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:00.132 [2024-12-16 12:31:07.081773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:00.132 [2024-12-16 12:31:07.081779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:00.132 [2024-12-16 12:31:07.081787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.132 [2024-12-16 12:31:07.081793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:00.132 [2024-12-16 12:31:07.081800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:19:00.132 [2024-12-16 12:31:07.081805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.132 [2024-12-16 12:31:07.081848] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
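[editor's note] The blk_offs/blk_sz values in the superblock layout dump above are hex counts of FTL blocks, while dump_region prints MiB. Assuming the standard 4 KiB FTL block size (which the dump itself corroborates: the p2l regions are listed as 8.00 MiB and blk_sz 0x800), the conversion is:

  # Hypothetical helper: blk_sz (hex block count) -> MiB, assuming 4096-byte FTL blocks
  blk_to_mib() { echo "scale=2; $(($1)) * 4096 / 1048576" | bc; }
  blk_to_mib 0x800      # -> 8.00      (p2l0..p2l3 above)
  blk_to_mib 0x1900000  # -> 102400.00 (the data_btm region above)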
00:19:00.132 [2024-12-16 12:31:07.081856] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:03.431 [2024-12-16 12:31:10.347107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.431 [2024-12-16 12:31:10.347262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:03.431 [2024-12-16 12:31:10.347283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3265.249 ms 00:19:03.431 [2024-12-16 12:31:10.347290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.432 [2024-12-16 12:31:10.371009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.432 [2024-12-16 12:31:10.371138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:03.432 [2024-12-16 12:31:10.371168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.547 ms 00:19:03.432 [2024-12-16 12:31:10.371176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.432 [2024-12-16 12:31:10.371266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.432 [2024-12-16 12:31:10.371275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:03.432 [2024-12-16 12:31:10.371286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:03.432 [2024-12-16 12:31:10.371292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.432 [2024-12-16 12:31:10.412030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.432 [2024-12-16 12:31:10.412064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:03.432 [2024-12-16 12:31:10.412077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.708 ms 00:19:03.432 [2024-12-16 12:31:10.412083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.432 [2024-12-16 12:31:10.412116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.432 [2024-12-16 12:31:10.412123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:03.432 [2024-12-16 12:31:10.412132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:03.432 [2024-12-16 12:31:10.412139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.432 [2024-12-16 12:31:10.412580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.432 [2024-12-16 12:31:10.412600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:03.432 [2024-12-16 12:31:10.412609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:19:03.432 [2024-12-16 12:31:10.412615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.432 [2024-12-16 12:31:10.412703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.432 [2024-12-16 12:31:10.412711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:03.432 [2024-12-16 12:31:10.412721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:03.432 [2024-12-16 12:31:10.412727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.432 [2024-12-16 12:31:10.424698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.432 [2024-12-16 12:31:10.424724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:03.432 [2024-12-16 
12:31:10.424734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.955 ms 00:19:03.432 [2024-12-16 12:31:10.424746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.432 [2024-12-16 12:31:10.434581] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:03.432 [2024-12-16 12:31:10.440167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.432 [2024-12-16 12:31:10.440193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:03.432 [2024-12-16 12:31:10.440201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.368 ms 00:19:03.432 [2024-12-16 12:31:10.440209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.432 [2024-12-16 12:31:10.515209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.432 [2024-12-16 12:31:10.515242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:03.432 [2024-12-16 12:31:10.515252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.982 ms 00:19:03.432 [2024-12-16 12:31:10.515260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.432 [2024-12-16 12:31:10.515406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.432 [2024-12-16 12:31:10.515418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:03.432 [2024-12-16 12:31:10.515426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:03.432 [2024-12-16 12:31:10.515436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.432 [2024-12-16 12:31:10.534096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.432 [2024-12-16 12:31:10.534232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:03.432 [2024-12-16 12:31:10.534246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.620 ms 00:19:03.432 [2024-12-16 12:31:10.534254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.692 [2024-12-16 12:31:10.552245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.692 [2024-12-16 12:31:10.552274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:03.692 [2024-12-16 12:31:10.552283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.966 ms 00:19:03.692 [2024-12-16 12:31:10.552291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.692 [2024-12-16 12:31:10.552729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.692 [2024-12-16 12:31:10.552746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:03.693 [2024-12-16 12:31:10.552754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:19:03.693 [2024-12-16 12:31:10.552762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.693 [2024-12-16 12:31:10.615072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.693 [2024-12-16 12:31:10.615105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:03.693 [2024-12-16 12:31:10.615114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.278 ms 00:19:03.693 [2024-12-16 12:31:10.615122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.693 [2024-12-16 
12:31:10.635197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.693 [2024-12-16 12:31:10.635230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:03.693 [2024-12-16 12:31:10.635241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.011 ms 00:19:03.693 [2024-12-16 12:31:10.635250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.693 [2024-12-16 12:31:10.653443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.693 [2024-12-16 12:31:10.653565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:03.693 [2024-12-16 12:31:10.653578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.166 ms 00:19:03.693 [2024-12-16 12:31:10.653585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.693 [2024-12-16 12:31:10.672775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.693 [2024-12-16 12:31:10.672881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:03.693 [2024-12-16 12:31:10.672893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.165 ms 00:19:03.693 [2024-12-16 12:31:10.672901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.693 [2024-12-16 12:31:10.672927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.693 [2024-12-16 12:31:10.672938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:03.693 [2024-12-16 12:31:10.672946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:03.693 [2024-12-16 12:31:10.672954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.693 [2024-12-16 12:31:10.673020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.693 [2024-12-16 12:31:10.673029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:03.693 [2024-12-16 12:31:10.673036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:03.693 [2024-12-16 12:31:10.673044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.693 [2024-12-16 12:31:10.673976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3605.672 ms, result 0 00:19:03.693 { 00:19:03.693 "name": "ftl0", 00:19:03.693 "uuid": "d5268d61-5635-41a1-8313-c6ad29f3de84" 00:19:03.693 } 00:19:03.693 12:31:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:03.693 12:31:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:03.693 12:31:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:03.955 12:31:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:03.955 [2024-12-16 12:31:10.982028] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:03.955 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:03.955 Zero copy mechanism will not be used. 00:19:03.955 Running I/O for 4 seconds... 
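[editor's note] The zero-copy notice above fires because the 69632-byte I/O size (68 KiB) exceeds bdevperf's 65536-byte zero-copy threshold. The MiB/s column in the results that follow is simply IOPS times I/O size; as a quick cross-check of this queue-depth-1 run:

  # 861.94 IOPS at 69632 B per I/O -> MiB/s (the table below rounds to 57.24)
  echo "scale=2; 861.94 * 69632 / 1048576" | bc   # -> 57.23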
00:19:06.277 744.00 IOPS, 49.41 MiB/s [2024-12-16T12:31:14.322Z] 840.50 IOPS, 55.81 MiB/s [2024-12-16T12:31:15.266Z] 853.67 IOPS, 56.69 MiB/s [2024-12-16T12:31:15.266Z] 862.00 IOPS, 57.24 MiB/s 00:19:08.160 Latency(us) 00:19:08.160 [2024-12-16T12:31:15.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.160 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:08.160 ftl0 : 4.00 861.94 57.24 0.00 0.00 1229.62 299.32 2520.62 00:19:08.160 [2024-12-16T12:31:15.266Z] =================================================================================================================== 00:19:08.160 [2024-12-16T12:31:15.266Z] Total : 861.94 57.24 0.00 0.00 1229.62 299.32 2520.62 00:19:08.160 [2024-12-16 12:31:14.989511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:08.160 { 00:19:08.160 "results": [ 00:19:08.160 { 00:19:08.160 "job": "ftl0", 00:19:08.160 "core_mask": "0x1", 00:19:08.160 "workload": "randwrite", 00:19:08.160 "status": "finished", 00:19:08.160 "queue_depth": 1, 00:19:08.160 "io_size": 69632, 00:19:08.160 "runtime": 4.001421, 00:19:08.160 "iops": 861.9437944670156, 00:19:08.160 "mibps": 57.23845510132525, 00:19:08.160 "io_failed": 0, 00:19:08.160 "io_timeout": 0, 00:19:08.160 "avg_latency_us": 1229.621264580592, 00:19:08.160 "min_latency_us": 299.32307692307694, 00:19:08.160 "max_latency_us": 2520.6153846153848 00:19:08.160 } 00:19:08.160 ], 00:19:08.160 "core_count": 1 00:19:08.160 } 00:19:08.160 12:31:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:08.160 [2024-12-16 12:31:15.093762] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:08.160 Running I/O for 4 seconds... 
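[editor's note] Each perform_tests invocation ends with a JSON results document like the one above. The same jq already used to match the bdev name can pull the headline numbers out of a saved copy; a minimal sketch, assuming the JSON were captured to a hypothetical results.json:

  jq -r '.results[0] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json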
00:19:10.046 6869.00 IOPS, 26.83 MiB/s [2024-12-16T12:31:18.538Z] 6057.00 IOPS, 23.66 MiB/s [2024-12-16T12:31:19.109Z] 5763.67 IOPS, 22.51 MiB/s [2024-12-16T12:31:19.368Z] 5425.50 IOPS, 21.19 MiB/s 00:19:12.262 Latency(us) 00:19:12.262 [2024-12-16T12:31:19.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.262 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:12.262 ftl0 : 4.04 5407.02 21.12 0.00 0.00 23561.75 463.16 49605.71 00:19:12.262 [2024-12-16T12:31:19.368Z] =================================================================================================================== 00:19:12.262 [2024-12-16T12:31:19.368Z] Total : 5407.02 21.12 0.00 0.00 23561.75 0.00 49605.71 00:19:12.262 [2024-12-16 12:31:19.138843] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:12.262 { 00:19:12.262 "results": [ 00:19:12.262 { 00:19:12.262 "job": "ftl0", 00:19:12.262 "core_mask": "0x1", 00:19:12.262 "workload": "randwrite", 00:19:12.262 "status": "finished", 00:19:12.262 "queue_depth": 128, 00:19:12.262 "io_size": 4096, 00:19:12.262 "runtime": 4.036603, 00:19:12.262 "iops": 5407.021696213375, 00:19:12.262 "mibps": 21.121178500833498, 00:19:12.262 "io_failed": 0, 00:19:12.262 "io_timeout": 0, 00:19:12.262 "avg_latency_us": 23561.751635099987, 00:19:12.262 "min_latency_us": 463.1630769230769, 00:19:12.262 "max_latency_us": 49605.71076923077 00:19:12.262 } 00:19:12.262 ], 00:19:12.262 "core_count": 1 00:19:12.262 } 00:19:12.262 12:31:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:12.262 [2024-12-16 12:31:19.253019] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:12.262 Running I/O for 4 seconds... 
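[editor's note] For the queue-depth-128 randwrite run above, the reported average latency is consistent with Little's law (outstanding I/Os ≈ IOPS × latency), meaning the queue stayed essentially saturated for the whole run:

  # 5407.02 IOPS * 23561.75 us of latency -> outstanding I/Os, vs. a queue depth of 128
  echo "scale=1; 5407.02 * 23561.75 / 1000000" | bc   # -> 127.3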
00:19:14.588 4346.00 IOPS, 16.98 MiB/s [2024-12-16T12:31:22.267Z] 4743.00 IOPS, 18.53 MiB/s [2024-12-16T12:31:23.653Z] 5160.00 IOPS, 20.16 MiB/s [2024-12-16T12:31:23.653Z] 5286.25 IOPS, 20.65 MiB/s 00:19:16.547 Latency(us) 00:19:16.547 [2024-12-16T12:31:23.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.547 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:16.547 Verification LBA range: start 0x0 length 0x1400000 00:19:16.547 ftl0 : 4.01 5302.66 20.71 0.00 0.00 24075.45 340.28 87919.06 00:19:16.547 [2024-12-16T12:31:23.653Z] =================================================================================================================== 00:19:16.547 [2024-12-16T12:31:23.653Z] Total : 5302.66 20.71 0.00 0.00 24075.45 0.00 87919.06 00:19:16.547 [2024-12-16 12:31:23.279397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:16.547 { 00:19:16.547 "results": [ 00:19:16.547 { 00:19:16.547 "job": "ftl0", 00:19:16.547 "core_mask": "0x1", 00:19:16.547 "workload": "verify", 00:19:16.547 "status": "finished", 00:19:16.547 "verify_range": { 00:19:16.547 "start": 0, 00:19:16.547 "length": 20971520 00:19:16.547 }, 00:19:16.547 "queue_depth": 128, 00:19:16.547 "io_size": 4096, 00:19:16.547 "runtime": 4.009872, 00:19:16.547 "iops": 5302.663027647765, 00:19:16.547 "mibps": 20.713527451749083, 00:19:16.547 "io_failed": 0, 00:19:16.547 "io_timeout": 0, 00:19:16.547 "avg_latency_us": 24075.45185996621, 00:19:16.547 "min_latency_us": 340.2830769230769, 00:19:16.547 "max_latency_us": 87919.06461538462 00:19:16.547 } 00:19:16.547 ], 00:19:16.547 "core_count": 1 00:19:16.547 } 00:19:16.547 12:31:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:16.547 [2024-12-16 12:31:23.483755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.547 [2024-12-16 12:31:23.483912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:16.547 [2024-12-16 12:31:23.483966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:16.547 [2024-12-16 12:31:23.483987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.547 [2024-12-16 12:31:23.484019] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.547 [2024-12-16 12:31:23.486285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.547 [2024-12-16 12:31:23.486378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:16.547 [2024-12-16 12:31:23.486426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.233 ms 00:19:16.547 [2024-12-16 12:31:23.486444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.547 [2024-12-16 12:31:23.489079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.547 [2024-12-16 12:31:23.489184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:16.547 [2024-12-16 12:31:23.489233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.606 ms 00:19:16.547 [2024-12-16 12:31:23.489255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.809 [2024-12-16 12:31:23.668388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.809 [2024-12-16 12:31:23.668514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:16.809 [2024-12-16 12:31:23.668575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 179.105 ms 00:19:16.809 [2024-12-16 12:31:23.668594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.809 [2024-12-16 12:31:23.673205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.809 [2024-12-16 12:31:23.673292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:16.809 [2024-12-16 12:31:23.673334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:19:16.809 [2024-12-16 12:31:23.673354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.809 [2024-12-16 12:31:23.692585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.809 [2024-12-16 12:31:23.692682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:16.809 [2024-12-16 12:31:23.692725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.156 ms 00:19:16.809 [2024-12-16 12:31:23.692743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.809 [2024-12-16 12:31:23.706421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.809 [2024-12-16 12:31:23.706519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:16.809 [2024-12-16 12:31:23.706563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.644 ms 00:19:16.809 [2024-12-16 12:31:23.706581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.809 [2024-12-16 12:31:23.706701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.809 [2024-12-16 12:31:23.706723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:16.809 [2024-12-16 12:31:23.706743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:16.809 [2024-12-16 12:31:23.706759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.810 [2024-12-16 12:31:23.725591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.810 [2024-12-16 12:31:23.725679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:16.810 [2024-12-16 12:31:23.725722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.809 ms 00:19:16.810 [2024-12-16 12:31:23.725739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.810 [2024-12-16 12:31:23.744408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.810 [2024-12-16 12:31:23.744499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:16.810 [2024-12-16 12:31:23.744542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.637 ms 00:19:16.810 [2024-12-16 12:31:23.744559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.810 [2024-12-16 12:31:23.762388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.810 [2024-12-16 12:31:23.762476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:16.810 [2024-12-16 12:31:23.762518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.796 ms 00:19:16.810 [2024-12-16 12:31:23.762534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.810 [2024-12-16 12:31:23.780255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.810 [2024-12-16 12:31:23.780341] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:16.810 [2024-12-16 12:31:23.780384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.660 ms 00:19:16.810 [2024-12-16 12:31:23.780401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.810 [2024-12-16 12:31:23.780432] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:16.810 [2024-12-16 12:31:23.780454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.780957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:16.810 [2024-12-16 12:31:23.781102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.781998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:16.810 [2024-12-16 12:31:23.782089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782243] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:16.811 [2024-12-16 12:31:23.782277] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:16.811 [2024-12-16 12:31:23.782286] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d5268d61-5635-41a1-8313-c6ad29f3de84 00:19:16.811 [2024-12-16 12:31:23.782294] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:16.811 [2024-12-16 12:31:23.782302] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:16.811 [2024-12-16 12:31:23.782308] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:16.811 [2024-12-16 12:31:23.782316] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:16.811 [2024-12-16 12:31:23.782321] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:16.811 [2024-12-16 12:31:23.782329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:16.811 [2024-12-16 12:31:23.782335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:16.811 [2024-12-16 12:31:23.782343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:16.811 [2024-12-16 12:31:23.782348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:16.811 [2024-12-16 12:31:23.782356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.811 [2024-12-16 12:31:23.782366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:16.811 [2024-12-16 12:31:23.782375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:19:16.811 [2024-12-16 12:31:23.782381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.811 [2024-12-16 12:31:23.792635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.811 [2024-12-16 12:31:23.792729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:16.811 [2024-12-16 12:31:23.792744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.227 ms 00:19:16.811 [2024-12-16 12:31:23.792751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.811 [2024-12-16 12:31:23.793037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.811 [2024-12-16 12:31:23.793045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:16.811 [2024-12-16 12:31:23.793053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:19:16.811 [2024-12-16 12:31:23.793060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.811 [2024-12-16 12:31:23.822452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.811 [2024-12-16 12:31:23.822479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:16.811 [2024-12-16 12:31:23.822492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.811 [2024-12-16 12:31:23.822498] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:16.811 [2024-12-16 12:31:23.822550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.811 [2024-12-16 12:31:23.822557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:16.811 [2024-12-16 12:31:23.822565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.811 [2024-12-16 12:31:23.822571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.811 [2024-12-16 12:31:23.822633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.811 [2024-12-16 12:31:23.822642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:16.811 [2024-12-16 12:31:23.822651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.811 [2024-12-16 12:31:23.822657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.811 [2024-12-16 12:31:23.822670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.811 [2024-12-16 12:31:23.822677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:16.811 [2024-12-16 12:31:23.822685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.811 [2024-12-16 12:31:23.822691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.811 [2024-12-16 12:31:23.886244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.811 [2024-12-16 12:31:23.886280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:16.811 [2024-12-16 12:31:23.886292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.811 [2024-12-16 12:31:23.886299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.072 [2024-12-16 12:31:23.938531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.072 [2024-12-16 12:31:23.938568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.072 [2024-12-16 12:31:23.938579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.072 [2024-12-16 12:31:23.938586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.072 [2024-12-16 12:31:23.938683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.072 [2024-12-16 12:31:23.938692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.072 [2024-12-16 12:31:23.938701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.072 [2024-12-16 12:31:23.938707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.072 [2024-12-16 12:31:23.938743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.072 [2024-12-16 12:31:23.938751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.072 [2024-12-16 12:31:23.938759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.072 [2024-12-16 12:31:23.938766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.072 [2024-12-16 12:31:23.938842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.072 [2024-12-16 12:31:23.938853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.072 [2024-12-16 12:31:23.938863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:17.072 [2024-12-16 12:31:23.938869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.072 [2024-12-16 12:31:23.938898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.072 [2024-12-16 12:31:23.938906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:17.072 [2024-12-16 12:31:23.938914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.072 [2024-12-16 12:31:23.938921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.072 [2024-12-16 12:31:23.938955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.072 [2024-12-16 12:31:23.938965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.072 [2024-12-16 12:31:23.938974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.072 [2024-12-16 12:31:23.938987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.072 [2024-12-16 12:31:23.939030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.072 [2024-12-16 12:31:23.939040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.072 [2024-12-16 12:31:23.939048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.072 [2024-12-16 12:31:23.939055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.072 [2024-12-16 12:31:23.939204] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 455.373 ms, result 0 00:19:17.072 true 00:19:17.072 12:31:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77703 00:19:17.072 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77703 ']' 00:19:17.072 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77703 00:19:17.072 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:17.072 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.072 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77703 00:19:17.072 killing process with pid 77703 00:19:17.072 Received shutdown signal, test time was about 4.000000 seconds 00:19:17.072 00:19:17.072 Latency(us) 00:19:17.072 [2024-12-16T12:31:24.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.072 [2024-12-16T12:31:24.178Z] =================================================================================================================== 00:19:17.072 [2024-12-16T12:31:24.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.072 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.072 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.072 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77703' 00:19:17.072 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77703 00:19:17.072 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77703 00:19:20.371 Remove shared memory files 00:19:20.371 12:31:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:20.371 12:31:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:20.371 12:31:26 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:20.371 12:31:26 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:20.371 12:31:26 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:20.371 12:31:26 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:20.371 12:31:26 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:20.371 12:31:26 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:20.371 ************************************ 00:19:20.371 END TEST ftl_bdevperf 00:19:20.371 ************************************ 00:19:20.371 00:19:20.371 real 0m23.610s 00:19:20.371 user 0m26.242s 00:19:20.371 sys 0m0.880s 00:19:20.371 12:31:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.371 12:31:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:20.371 12:31:26 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:20.371 12:31:26 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:20.371 12:31:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.371 12:31:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:20.371 ************************************ 00:19:20.371 START TEST ftl_trim 00:19:20.371 ************************************ 00:19:20.371 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:20.371 * Looking for test storage... 00:19:20.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:20.371 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:20.371 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:20.371 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:20.371 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:20.371 12:31:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.372 12:31:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:20.372 12:31:26 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.372 12:31:26 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.372 12:31:26 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.372 12:31:26 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:20.372 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.372 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:20.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.372 --rc genhtml_branch_coverage=1 00:19:20.372 --rc genhtml_function_coverage=1 00:19:20.372 --rc genhtml_legend=1 00:19:20.372 --rc geninfo_all_blocks=1 00:19:20.372 --rc geninfo_unexecuted_blocks=1 00:19:20.372 00:19:20.372 ' 00:19:20.372 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:20.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.372 --rc genhtml_branch_coverage=1 00:19:20.372 --rc genhtml_function_coverage=1 00:19:20.372 --rc genhtml_legend=1 00:19:20.372 --rc geninfo_all_blocks=1 00:19:20.372 --rc geninfo_unexecuted_blocks=1 00:19:20.372 00:19:20.372 ' 00:19:20.372 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:20.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.372 --rc genhtml_branch_coverage=1 00:19:20.372 --rc genhtml_function_coverage=1 00:19:20.372 --rc genhtml_legend=1 00:19:20.372 --rc geninfo_all_blocks=1 00:19:20.372 --rc geninfo_unexecuted_blocks=1 00:19:20.372 00:19:20.372 ' 00:19:20.372 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:20.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.372 --rc genhtml_branch_coverage=1 00:19:20.372 --rc genhtml_function_coverage=1 00:19:20.372 --rc genhtml_legend=1 00:19:20.372 --rc geninfo_all_blocks=1 00:19:20.372 --rc geninfo_unexecuted_blocks=1 00:19:20.372 00:19:20.372 ' 00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
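[editor's note] The lt/cmp_versions trace above performs a field-wise comparison of dotted version strings (here lcov 1.15 against 2) to pick the right lcov option set. A condensed re-sketch of that logic, not the verbatim scripts/common.sh implementation:

  lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # this field decides: less-than
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # this field decides: greater-than
    done
    return 1   # all fields equal: not less-than
  }
  lt 1.15 2 && echo "lcov predates 2.x: use the legacy --rc lcov_* options"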
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid=
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]]
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78049
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78049
00:19:20.372 12:31:26 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:19:20.372 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78049 ']'
00:19:20.372 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:20.372 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:20.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:20.372 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:20.372 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:20.372 12:31:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:19:20.372 [2024-12-16 12:31:27.051790] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:19:20.372 [2024-12-16 12:31:27.052065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78049 ]
00:19:20.372 [2024-12-16 12:31:27.208243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:20.372 [2024-12-16 12:31:27.296575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:19:20.372 [2024-12-16 12:31:27.296877] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:19:20.372 [2024-12-16 12:31:27.296896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:19:20.944 12:31:27 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:20.944 12:31:27 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:19:20.944 12:31:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:19:20.944 12:31:27 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0
00:19:20.944 12:31:27 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:19:20.944 12:31:27 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424
00:19:20.944 12:31:27 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev
00:19:20.944 12:31:27 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:19:21.204 12:31:28 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:19:21.204 12:31:28 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size
00:19:21.204 12:31:28 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:19:21.204 12:31:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:19:21.204 12:31:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:19:21.204 12:31:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:19:21.204 12:31:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:19:21.204 12:31:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
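get_bdev_size, entered just above, is a thin wrapper around this bdev_get_bdevs call: it captures the JSON reply that follows, extracts block_size and num_blocks with jq, and converts to MiB. A minimal equivalent of what the trace performs (rpc.py path and bdev name as used in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev_info=$("$rpc" bdev_get_bdevs -b nvme0n1)   # JSON array with one element, shown below
bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 4096
nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 1310720
echo $(( nb * bs / 1024 / 1024 ))               # 1310720 * 4096 B = 5120 MiB, matching bdev_size=5120 below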
00:19:21.463 "e69a476e-911d-4674-82f3-04abe07df3d1" 00:19:21.463 ], 00:19:21.463 "product_name": "NVMe disk", 00:19:21.463 "block_size": 4096, 00:19:21.463 "num_blocks": 1310720, 00:19:21.463 "uuid": "e69a476e-911d-4674-82f3-04abe07df3d1", 00:19:21.463 "numa_id": -1, 00:19:21.463 "assigned_rate_limits": { 00:19:21.463 "rw_ios_per_sec": 0, 00:19:21.463 "rw_mbytes_per_sec": 0, 00:19:21.463 "r_mbytes_per_sec": 0, 00:19:21.463 "w_mbytes_per_sec": 0 00:19:21.463 }, 00:19:21.463 "claimed": true, 00:19:21.463 "claim_type": "read_many_write_one", 00:19:21.463 "zoned": false, 00:19:21.463 "supported_io_types": { 00:19:21.463 "read": true, 00:19:21.463 "write": true, 00:19:21.463 "unmap": true, 00:19:21.463 "flush": true, 00:19:21.463 "reset": true, 00:19:21.463 "nvme_admin": true, 00:19:21.463 "nvme_io": true, 00:19:21.463 "nvme_io_md": false, 00:19:21.463 "write_zeroes": true, 00:19:21.463 "zcopy": false, 00:19:21.463 "get_zone_info": false, 00:19:21.463 "zone_management": false, 00:19:21.463 "zone_append": false, 00:19:21.463 "compare": true, 00:19:21.463 "compare_and_write": false, 00:19:21.463 "abort": true, 00:19:21.463 "seek_hole": false, 00:19:21.463 "seek_data": false, 00:19:21.463 "copy": true, 00:19:21.463 "nvme_iov_md": false 00:19:21.463 }, 00:19:21.463 "driver_specific": { 00:19:21.463 "nvme": [ 00:19:21.463 { 00:19:21.463 "pci_address": "0000:00:11.0", 00:19:21.463 "trid": { 00:19:21.463 "trtype": "PCIe", 00:19:21.463 "traddr": "0000:00:11.0" 00:19:21.463 }, 00:19:21.463 "ctrlr_data": { 00:19:21.463 "cntlid": 0, 00:19:21.463 "vendor_id": "0x1b36", 00:19:21.463 "model_number": "QEMU NVMe Ctrl", 00:19:21.463 "serial_number": "12341", 00:19:21.463 "firmware_revision": "8.0.0", 00:19:21.463 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:21.463 "oacs": { 00:19:21.463 "security": 0, 00:19:21.463 "format": 1, 00:19:21.463 "firmware": 0, 00:19:21.463 "ns_manage": 1 00:19:21.463 }, 00:19:21.463 "multi_ctrlr": false, 00:19:21.463 "ana_reporting": false 00:19:21.463 }, 00:19:21.463 "vs": { 00:19:21.463 "nvme_version": "1.4" 00:19:21.463 }, 00:19:21.463 "ns_data": { 00:19:21.463 "id": 1, 00:19:21.463 "can_share": false 00:19:21.463 } 00:19:21.463 } 00:19:21.463 ], 00:19:21.463 "mp_policy": "active_passive" 00:19:21.463 } 00:19:21.463 } 00:19:21.463 ]' 00:19:21.463 12:31:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:21.463 12:31:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:21.463 12:31:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:21.463 12:31:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:21.463 12:31:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:21.463 12:31:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:21.463 12:31:28 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:21.463 12:31:28 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:21.463 12:31:28 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:21.463 12:31:28 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:21.463 12:31:28 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:21.722 12:31:28 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=15410f5c-a6d1-412b-b655-793df453aa91 00:19:21.722 12:31:28 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:21.722 12:31:28 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 15410f5c-a6d1-412b-b655-793df453aa91 00:19:21.982 12:31:28 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:22.244 12:31:29 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=4a9ddcee-e232-496e-80ee-d1597a2ad774 00:19:22.244 12:31:29 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4a9ddcee-e232-496e-80ee-d1597a2ad774 00:19:22.244 12:31:29 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:22.244 12:31:29 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:22.244 12:31:29 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:22.244 12:31:29 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:22.244 12:31:29 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:22.244 12:31:29 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:22.244 12:31:29 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:22.244 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:22.244 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:22.244 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:22.244 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:22.244 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:22.502 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:22.502 { 00:19:22.502 "name": "4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c", 00:19:22.502 "aliases": [ 00:19:22.502 "lvs/nvme0n1p0" 00:19:22.502 ], 00:19:22.502 "product_name": "Logical Volume", 00:19:22.502 "block_size": 4096, 00:19:22.502 "num_blocks": 26476544, 00:19:22.502 "uuid": "4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c", 00:19:22.502 "assigned_rate_limits": { 00:19:22.502 "rw_ios_per_sec": 0, 00:19:22.502 "rw_mbytes_per_sec": 0, 00:19:22.502 "r_mbytes_per_sec": 0, 00:19:22.502 "w_mbytes_per_sec": 0 00:19:22.502 }, 00:19:22.502 "claimed": false, 00:19:22.502 "zoned": false, 00:19:22.502 "supported_io_types": { 00:19:22.502 "read": true, 00:19:22.502 "write": true, 00:19:22.502 "unmap": true, 00:19:22.502 "flush": false, 00:19:22.502 "reset": true, 00:19:22.502 "nvme_admin": false, 00:19:22.502 "nvme_io": false, 00:19:22.502 "nvme_io_md": false, 00:19:22.502 "write_zeroes": true, 00:19:22.502 "zcopy": false, 00:19:22.502 "get_zone_info": false, 00:19:22.502 "zone_management": false, 00:19:22.502 "zone_append": false, 00:19:22.502 "compare": false, 00:19:22.502 "compare_and_write": false, 00:19:22.502 "abort": false, 00:19:22.502 "seek_hole": true, 00:19:22.502 "seek_data": true, 00:19:22.502 "copy": false, 00:19:22.502 "nvme_iov_md": false 00:19:22.502 }, 00:19:22.502 "driver_specific": { 00:19:22.503 "lvol": { 00:19:22.503 "lvol_store_uuid": "4a9ddcee-e232-496e-80ee-d1597a2ad774", 00:19:22.503 "base_bdev": "nvme0n1", 00:19:22.503 "thin_provision": true, 00:19:22.503 "num_allocated_clusters": 0, 00:19:22.503 "snapshot": false, 00:19:22.503 "clone": false, 00:19:22.503 "esnap_clone": false 00:19:22.503 } 00:19:22.503 } 00:19:22.503 } 00:19:22.503 ]' 00:19:22.503 12:31:29 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:22.503 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:22.503 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:22.503 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:22.503 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:22.503 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:22.503 12:31:29 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:22.503 12:31:29 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:22.503 12:31:29 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:22.761 12:31:29 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:22.761 12:31:29 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:22.761 12:31:29 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:22.761 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:22.761 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:22.761 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:22.761 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:22.761 12:31:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:23.019 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:23.019 { 00:19:23.019 "name": "4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c", 00:19:23.019 "aliases": [ 00:19:23.019 "lvs/nvme0n1p0" 00:19:23.019 ], 00:19:23.019 "product_name": "Logical Volume", 00:19:23.019 "block_size": 4096, 00:19:23.019 "num_blocks": 26476544, 00:19:23.019 "uuid": "4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c", 00:19:23.019 "assigned_rate_limits": { 00:19:23.019 "rw_ios_per_sec": 0, 00:19:23.019 "rw_mbytes_per_sec": 0, 00:19:23.019 "r_mbytes_per_sec": 0, 00:19:23.019 "w_mbytes_per_sec": 0 00:19:23.019 }, 00:19:23.019 "claimed": false, 00:19:23.019 "zoned": false, 00:19:23.019 "supported_io_types": { 00:19:23.019 "read": true, 00:19:23.019 "write": true, 00:19:23.019 "unmap": true, 00:19:23.019 "flush": false, 00:19:23.019 "reset": true, 00:19:23.019 "nvme_admin": false, 00:19:23.019 "nvme_io": false, 00:19:23.019 "nvme_io_md": false, 00:19:23.019 "write_zeroes": true, 00:19:23.019 "zcopy": false, 00:19:23.019 "get_zone_info": false, 00:19:23.019 "zone_management": false, 00:19:23.019 "zone_append": false, 00:19:23.019 "compare": false, 00:19:23.019 "compare_and_write": false, 00:19:23.019 "abort": false, 00:19:23.019 "seek_hole": true, 00:19:23.019 "seek_data": true, 00:19:23.019 "copy": false, 00:19:23.019 "nvme_iov_md": false 00:19:23.019 }, 00:19:23.019 "driver_specific": { 00:19:23.019 "lvol": { 00:19:23.019 "lvol_store_uuid": "4a9ddcee-e232-496e-80ee-d1597a2ad774", 00:19:23.019 "base_bdev": "nvme0n1", 00:19:23.019 "thin_provision": true, 00:19:23.019 "num_allocated_clusters": 0, 00:19:23.019 "snapshot": false, 00:19:23.019 "clone": false, 00:19:23.019 "esnap_clone": false 00:19:23.019 } 00:19:23.019 } 00:19:23.019 } 00:19:23.019 ]' 00:19:23.019 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:23.019 12:31:30 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:23.019 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:23.019 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:23.019 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:23.019 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:23.019 12:31:30 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:23.019 12:31:30 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:23.278 12:31:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:23.278 12:31:30 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:23.278 12:31:30 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:23.278 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:23.278 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:23.278 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:23.278 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:23.278 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c 00:19:23.536 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:23.536 { 00:19:23.536 "name": "4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c", 00:19:23.536 "aliases": [ 00:19:23.536 "lvs/nvme0n1p0" 00:19:23.536 ], 00:19:23.536 "product_name": "Logical Volume", 00:19:23.536 "block_size": 4096, 00:19:23.536 "num_blocks": 26476544, 00:19:23.536 "uuid": "4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c", 00:19:23.536 "assigned_rate_limits": { 00:19:23.536 "rw_ios_per_sec": 0, 00:19:23.536 "rw_mbytes_per_sec": 0, 00:19:23.536 "r_mbytes_per_sec": 0, 00:19:23.536 "w_mbytes_per_sec": 0 00:19:23.536 }, 00:19:23.536 "claimed": false, 00:19:23.536 "zoned": false, 00:19:23.536 "supported_io_types": { 00:19:23.536 "read": true, 00:19:23.536 "write": true, 00:19:23.536 "unmap": true, 00:19:23.536 "flush": false, 00:19:23.536 "reset": true, 00:19:23.536 "nvme_admin": false, 00:19:23.536 "nvme_io": false, 00:19:23.536 "nvme_io_md": false, 00:19:23.536 "write_zeroes": true, 00:19:23.536 "zcopy": false, 00:19:23.536 "get_zone_info": false, 00:19:23.536 "zone_management": false, 00:19:23.536 "zone_append": false, 00:19:23.536 "compare": false, 00:19:23.536 "compare_and_write": false, 00:19:23.536 "abort": false, 00:19:23.536 "seek_hole": true, 00:19:23.536 "seek_data": true, 00:19:23.536 "copy": false, 00:19:23.536 "nvme_iov_md": false 00:19:23.536 }, 00:19:23.536 "driver_specific": { 00:19:23.536 "lvol": { 00:19:23.536 "lvol_store_uuid": "4a9ddcee-e232-496e-80ee-d1597a2ad774", 00:19:23.536 "base_bdev": "nvme0n1", 00:19:23.536 "thin_provision": true, 00:19:23.536 "num_allocated_clusters": 0, 00:19:23.536 "snapshot": false, 00:19:23.536 "clone": false, 00:19:23.536 "esnap_clone": false 00:19:23.536 } 00:19:23.536 } 00:19:23.536 } 00:19:23.536 ]' 00:19:23.536 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:23.536 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:23.536 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:23.536 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:23.536 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:23.536 12:31:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:23.536 12:31:30 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:23.536 12:31:30 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:23.795 [2024-12-16 12:31:30.742680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.795 [2024-12-16 12:31:30.742721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:23.795 [2024-12-16 12:31:30.742736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:23.795 [2024-12-16 12:31:30.742743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.795 [2024-12-16 12:31:30.745178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.795 [2024-12-16 12:31:30.745207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:23.795 [2024-12-16 12:31:30.745217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.402 ms 00:19:23.795 [2024-12-16 12:31:30.745225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.795 [2024-12-16 12:31:30.745315] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:23.795 [2024-12-16 12:31:30.745864] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:23.795 [2024-12-16 12:31:30.745889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.795 [2024-12-16 12:31:30.745897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:23.795 [2024-12-16 12:31:30.745906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:19:23.795 [2024-12-16 12:31:30.745912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.795 [2024-12-16 12:31:30.745990] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6f50ddf9-d4f2-4e3f-9574-029fb265c97a 00:19:23.795 [2024-12-16 12:31:30.747279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.795 [2024-12-16 12:31:30.747310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:23.795 [2024-12-16 12:31:30.747319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:23.795 [2024-12-16 12:31:30.747328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.795 [2024-12-16 12:31:30.754112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.795 [2024-12-16 12:31:30.754138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:23.795 [2024-12-16 12:31:30.754148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.716 ms 00:19:23.795 [2024-12-16 12:31:30.754169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.795 [2024-12-16 12:31:30.754282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.795 [2024-12-16 12:31:30.754294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:23.795 [2024-12-16 12:31:30.754301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.063 ms 00:19:23.795 [2024-12-16 12:31:30.754314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.795 [2024-12-16 12:31:30.754344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.795 [2024-12-16 12:31:30.754353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:23.795 [2024-12-16 12:31:30.754359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:23.795 [2024-12-16 12:31:30.754369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.795 [2024-12-16 12:31:30.754397] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:23.795 [2024-12-16 12:31:30.757608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.795 [2024-12-16 12:31:30.757636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:23.795 [2024-12-16 12:31:30.757646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.212 ms 00:19:23.795 [2024-12-16 12:31:30.757653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.795 [2024-12-16 12:31:30.757711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.795 [2024-12-16 12:31:30.757732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:23.795 [2024-12-16 12:31:30.757741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:23.795 [2024-12-16 12:31:30.757747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.795 [2024-12-16 12:31:30.757771] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:23.795 [2024-12-16 12:31:30.757887] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:23.795 [2024-12-16 12:31:30.757905] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:23.795 [2024-12-16 12:31:30.757914] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:23.795 [2024-12-16 12:31:30.757924] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:23.795 [2024-12-16 12:31:30.757931] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:23.795 [2024-12-16 12:31:30.757940] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:23.795 [2024-12-16 12:31:30.757946] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:23.795 [2024-12-16 12:31:30.757954] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:23.795 [2024-12-16 12:31:30.757962] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:23.795 [2024-12-16 12:31:30.757970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.795 [2024-12-16 12:31:30.757976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:23.795 [2024-12-16 12:31:30.757985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:19:23.795 [2024-12-16 12:31:30.757991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.795 [2024-12-16 12:31:30.758071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.795 
[2024-12-16 12:31:30.758078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:23.795 [2024-12-16 12:31:30.758086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:23.795 [2024-12-16 12:31:30.758092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.795 [2024-12-16 12:31:30.758225] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:23.795 [2024-12-16 12:31:30.758236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:23.795 [2024-12-16 12:31:30.758244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.795 [2024-12-16 12:31:30.758250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.795 [2024-12-16 12:31:30.758258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:23.795 [2024-12-16 12:31:30.758264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:23.795 [2024-12-16 12:31:30.758272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:23.795 [2024-12-16 12:31:30.758278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:23.795 [2024-12-16 12:31:30.758285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:23.795 [2024-12-16 12:31:30.758290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.795 [2024-12-16 12:31:30.758298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:23.795 [2024-12-16 12:31:30.758303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:23.795 [2024-12-16 12:31:30.758309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.795 [2024-12-16 12:31:30.758314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:23.795 [2024-12-16 12:31:30.758321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:23.795 [2024-12-16 12:31:30.758327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.795 [2024-12-16 12:31:30.758335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:23.795 [2024-12-16 12:31:30.758341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:23.795 [2024-12-16 12:31:30.758347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.795 [2024-12-16 12:31:30.758352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:23.795 [2024-12-16 12:31:30.758358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:23.795 [2024-12-16 12:31:30.758364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.795 [2024-12-16 12:31:30.758371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:23.795 [2024-12-16 12:31:30.758378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:23.795 [2024-12-16 12:31:30.758385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.795 [2024-12-16 12:31:30.758391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:23.795 [2024-12-16 12:31:30.758397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:23.795 [2024-12-16 12:31:30.758402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.796 [2024-12-16 12:31:30.758409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:23.796 [2024-12-16 12:31:30.758414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:23.796 [2024-12-16 12:31:30.758421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.796 [2024-12-16 12:31:30.758426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:23.796 [2024-12-16 12:31:30.758434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:23.796 [2024-12-16 12:31:30.758439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.796 [2024-12-16 12:31:30.758446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:23.796 [2024-12-16 12:31:30.758452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:23.796 [2024-12-16 12:31:30.758459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.796 [2024-12-16 12:31:30.758464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:23.796 [2024-12-16 12:31:30.758471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:23.796 [2024-12-16 12:31:30.758476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.796 [2024-12-16 12:31:30.758483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:23.796 [2024-12-16 12:31:30.758488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:23.796 [2024-12-16 12:31:30.758495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.796 [2024-12-16 12:31:30.758501] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:23.796 [2024-12-16 12:31:30.758508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:23.796 [2024-12-16 12:31:30.758515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.796 [2024-12-16 12:31:30.758523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.796 [2024-12-16 12:31:30.758529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:23.796 [2024-12-16 12:31:30.758537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:23.796 [2024-12-16 12:31:30.758543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:23.796 [2024-12-16 12:31:30.758549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:23.796 [2024-12-16 12:31:30.758556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:23.796 [2024-12-16 12:31:30.758562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:23.796 [2024-12-16 12:31:30.758569] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:23.796 [2024-12-16 12:31:30.758578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.796 [2024-12-16 12:31:30.758589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:23.796 [2024-12-16 12:31:30.758597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:23.796 [2024-12-16 12:31:30.758602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:23.796 [2024-12-16 12:31:30.758609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:23.796 [2024-12-16 12:31:30.758615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:23.796 [2024-12-16 12:31:30.758622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:23.796 [2024-12-16 12:31:30.758628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:23.796 [2024-12-16 12:31:30.758636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:23.796 [2024-12-16 12:31:30.758641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:23.796 [2024-12-16 12:31:30.758650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:23.796 [2024-12-16 12:31:30.758655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:23.796 [2024-12-16 12:31:30.758662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:23.796 [2024-12-16 12:31:30.758668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:23.796 [2024-12-16 12:31:30.758675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:23.796 [2024-12-16 12:31:30.758680] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:23.796 [2024-12-16 12:31:30.758691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.796 [2024-12-16 12:31:30.758697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:23.796 [2024-12-16 12:31:30.758704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:23.796 [2024-12-16 12:31:30.758710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:23.796 [2024-12-16 12:31:30.758717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:23.796 [2024-12-16 12:31:30.758724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.796 [2024-12-16 12:31:30.758732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:23.796 [2024-12-16 12:31:30.758738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:19:23.796 [2024-12-16 12:31:30.758746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.796 [2024-12-16 12:31:30.758826] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:23.796 [2024-12-16 12:31:30.758839] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:26.322 [2024-12-16 12:31:33.331520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.322 [2024-12-16 12:31:33.331710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:26.322 [2024-12-16 12:31:33.331732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2572.682 ms 00:19:26.322 [2024-12-16 12:31:33.331744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.322 [2024-12-16 12:31:33.360288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.322 [2024-12-16 12:31:33.360331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.322 [2024-12-16 12:31:33.360345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.282 ms 00:19:26.322 [2024-12-16 12:31:33.360355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.322 [2024-12-16 12:31:33.360511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.322 [2024-12-16 12:31:33.360525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:26.322 [2024-12-16 12:31:33.360550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:26.322 [2024-12-16 12:31:33.360562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.322 [2024-12-16 12:31:33.404649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.322 [2024-12-16 12:31:33.404818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.322 [2024-12-16 12:31:33.404837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.054 ms 00:19:26.322 [2024-12-16 12:31:33.404848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.322 [2024-12-16 12:31:33.404934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.322 [2024-12-16 12:31:33.404947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.322 [2024-12-16 12:31:33.404957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:26.322 [2024-12-16 12:31:33.404966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.322 [2024-12-16 12:31:33.405425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.322 [2024-12-16 12:31:33.405448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.322 [2024-12-16 12:31:33.405459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:19:26.322 [2024-12-16 12:31:33.405469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.322 [2024-12-16 12:31:33.405591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.322 [2024-12-16 12:31:33.405602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.322 [2024-12-16 12:31:33.405626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:26.322 [2024-12-16 12:31:33.405638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.322 [2024-12-16 12:31:33.421689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.322 [2024-12-16 12:31:33.421724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:26.322 [2024-12-16 12:31:33.421734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.020 ms 00:19:26.322 [2024-12-16 12:31:33.421744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.580 [2024-12-16 12:31:33.433900] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:26.580 [2024-12-16 12:31:33.451356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.580 [2024-12-16 12:31:33.451510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:26.580 [2024-12-16 12:31:33.451529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.513 ms 00:19:26.580 [2024-12-16 12:31:33.451537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.580 [2024-12-16 12:31:33.519071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.580 [2024-12-16 12:31:33.519232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:26.580 [2024-12-16 12:31:33.519255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.466 ms 00:19:26.580 [2024-12-16 12:31:33.519264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.580 [2024-12-16 12:31:33.519533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.580 [2024-12-16 12:31:33.519546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:26.580 [2024-12-16 12:31:33.519560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:19:26.580 [2024-12-16 12:31:33.519569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.580 [2024-12-16 12:31:33.542802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.580 [2024-12-16 12:31:33.542928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:26.580 [2024-12-16 12:31:33.542950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.200 ms 00:19:26.580 [2024-12-16 12:31:33.542957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.580 [2024-12-16 12:31:33.565403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.580 [2024-12-16 12:31:33.565524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:26.580 [2024-12-16 12:31:33.565543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.308 ms 00:19:26.580 [2024-12-16 12:31:33.565550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.580 [2024-12-16 12:31:33.566329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.580 [2024-12-16 12:31:33.566379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:26.580 [2024-12-16 12:31:33.566390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:19:26.580 [2024-12-16 12:31:33.566399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.580 [2024-12-16 12:31:33.635474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.580 [2024-12-16 12:31:33.635508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:26.580 [2024-12-16 12:31:33.635523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.040 ms 00:19:26.580 [2024-12-16 12:31:33.635531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
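Each FTL management step in this startup sequence is logged by trace_step() as an Action / name / duration / status quadruple, so per-step costs can be totalled directly from a captured log; for example (the log file name here is illustrative):

# Durations appear as "duration: <value> ms"; the value is the second-to-last field.
awk '/trace_step/ && /duration:/ { total += $(NF - 1) }
     END { printf "total: %.3f ms\n", total }' ftl_startup.log

The 'Management process finished' summary printed once startup completes reports the same thing as a single overall figure.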
00:19:26.580 [2024-12-16 12:31:33.660119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.580 [2024-12-16 12:31:33.660154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:26.580 [2024-12-16 12:31:33.660182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.499 ms 00:19:26.580 [2024-12-16 12:31:33.660191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.580 [2024-12-16 12:31:33.683258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.580 [2024-12-16 12:31:33.683373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:26.581 [2024-12-16 12:31:33.683392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.003 ms 00:19:26.581 [2024-12-16 12:31:33.683400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.839 [2024-12-16 12:31:33.707032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.839 [2024-12-16 12:31:33.707171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:26.839 [2024-12-16 12:31:33.707191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.561 ms 00:19:26.839 [2024-12-16 12:31:33.707198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.839 [2024-12-16 12:31:33.707261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.839 [2024-12-16 12:31:33.707273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:26.839 [2024-12-16 12:31:33.707286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:26.839 [2024-12-16 12:31:33.707293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.839 [2024-12-16 12:31:33.707373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.839 [2024-12-16 12:31:33.707384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:26.839 [2024-12-16 12:31:33.707394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:26.839 [2024-12-16 12:31:33.707401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.839 [2024-12-16 12:31:33.708304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:26.839 [2024-12-16 12:31:33.711296] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2965.302 ms, result 0 00:19:26.839 [2024-12-16 12:31:33.712587] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:26.839 { 00:19:26.839 "name": "ftl0", 00:19:26.839 "uuid": "6f50ddf9-d4f2-4e3f-9574-029fb265c97a" 00:19:26.839 } 00:19:26.839 12:31:33 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:26.839 12:31:33 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:26.839 12:31:33 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:26.839 12:31:33 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:26.839 12:31:33 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:26.839 12:31:33 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:26.839 12:31:33 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:26.839 12:31:33 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:27.110 [ 00:19:27.110 { 00:19:27.110 "name": "ftl0", 00:19:27.110 "aliases": [ 00:19:27.110 "6f50ddf9-d4f2-4e3f-9574-029fb265c97a" 00:19:27.110 ], 00:19:27.110 "product_name": "FTL disk", 00:19:27.110 "block_size": 4096, 00:19:27.110 "num_blocks": 23592960, 00:19:27.110 "uuid": "6f50ddf9-d4f2-4e3f-9574-029fb265c97a", 00:19:27.110 "assigned_rate_limits": { 00:19:27.110 "rw_ios_per_sec": 0, 00:19:27.110 "rw_mbytes_per_sec": 0, 00:19:27.110 "r_mbytes_per_sec": 0, 00:19:27.110 "w_mbytes_per_sec": 0 00:19:27.110 }, 00:19:27.110 "claimed": false, 00:19:27.110 "zoned": false, 00:19:27.110 "supported_io_types": { 00:19:27.110 "read": true, 00:19:27.110 "write": true, 00:19:27.110 "unmap": true, 00:19:27.110 "flush": true, 00:19:27.110 "reset": false, 00:19:27.110 "nvme_admin": false, 00:19:27.110 "nvme_io": false, 00:19:27.110 "nvme_io_md": false, 00:19:27.110 "write_zeroes": true, 00:19:27.110 "zcopy": false, 00:19:27.110 "get_zone_info": false, 00:19:27.110 "zone_management": false, 00:19:27.110 "zone_append": false, 00:19:27.110 "compare": false, 00:19:27.110 "compare_and_write": false, 00:19:27.110 "abort": false, 00:19:27.110 "seek_hole": false, 00:19:27.110 "seek_data": false, 00:19:27.110 "copy": false, 00:19:27.110 "nvme_iov_md": false 00:19:27.110 }, 00:19:27.110 "driver_specific": { 00:19:27.110 "ftl": { 00:19:27.110 "base_bdev": "4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c", 00:19:27.110 "cache": "nvc0n1p0" 00:19:27.110 } 00:19:27.110 } 00:19:27.110 } 00:19:27.110 ] 00:19:27.110 12:31:34 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:27.110 12:31:34 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:27.110 12:31:34 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:27.405 12:31:34 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:27.405 12:31:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:27.711 12:31:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:27.711 { 00:19:27.711 "name": "ftl0", 00:19:27.711 "aliases": [ 00:19:27.711 "6f50ddf9-d4f2-4e3f-9574-029fb265c97a" 00:19:27.711 ], 00:19:27.711 "product_name": "FTL disk", 00:19:27.711 "block_size": 4096, 00:19:27.711 "num_blocks": 23592960, 00:19:27.711 "uuid": "6f50ddf9-d4f2-4e3f-9574-029fb265c97a", 00:19:27.711 "assigned_rate_limits": { 00:19:27.711 "rw_ios_per_sec": 0, 00:19:27.711 "rw_mbytes_per_sec": 0, 00:19:27.711 "r_mbytes_per_sec": 0, 00:19:27.711 "w_mbytes_per_sec": 0 00:19:27.711 }, 00:19:27.711 "claimed": false, 00:19:27.711 "zoned": false, 00:19:27.711 "supported_io_types": { 00:19:27.711 "read": true, 00:19:27.711 "write": true, 00:19:27.711 "unmap": true, 00:19:27.711 "flush": true, 00:19:27.711 "reset": false, 00:19:27.711 "nvme_admin": false, 00:19:27.711 "nvme_io": false, 00:19:27.711 "nvme_io_md": false, 00:19:27.711 "write_zeroes": true, 00:19:27.711 "zcopy": false, 00:19:27.711 "get_zone_info": false, 00:19:27.711 "zone_management": false, 00:19:27.711 "zone_append": false, 00:19:27.711 "compare": false, 00:19:27.711 "compare_and_write": false, 00:19:27.711 "abort": false, 00:19:27.711 "seek_hole": false, 00:19:27.711 "seek_data": false, 00:19:27.711 "copy": false, 00:19:27.711 "nvme_iov_md": false 00:19:27.711 }, 00:19:27.711 "driver_specific": { 00:19:27.711 "ftl": { 00:19:27.711 "base_bdev": "4bd3c74e-2c82-4296-940d-0bfcb9ba0e6c", 
00:19:27.711 "cache": "nvc0n1p0" 00:19:27.711 } 00:19:27.711 } 00:19:27.711 } 00:19:27.711 ]' 00:19:27.711 12:31:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:27.711 12:31:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:27.711 12:31:34 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:27.711 [2024-12-16 12:31:34.760072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.711 [2024-12-16 12:31:34.760104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:27.711 [2024-12-16 12:31:34.760116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:27.711 [2024-12-16 12:31:34.760125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.711 [2024-12-16 12:31:34.760154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:27.711 [2024-12-16 12:31:34.762398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.711 [2024-12-16 12:31:34.762510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:27.711 [2024-12-16 12:31:34.762532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:19:27.711 [2024-12-16 12:31:34.762539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.711 [2024-12-16 12:31:34.763026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.711 [2024-12-16 12:31:34.763040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:27.711 [2024-12-16 12:31:34.763049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:19:27.711 [2024-12-16 12:31:34.763056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.711 [2024-12-16 12:31:34.765821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.711 [2024-12-16 12:31:34.765913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:27.711 [2024-12-16 12:31:34.765927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.735 ms 00:19:27.711 [2024-12-16 12:31:34.765933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.711 [2024-12-16 12:31:34.771363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.711 [2024-12-16 12:31:34.771442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:27.711 [2024-12-16 12:31:34.771486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.379 ms 00:19:27.711 [2024-12-16 12:31:34.771505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.711 [2024-12-16 12:31:34.789639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.711 [2024-12-16 12:31:34.789732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:27.711 [2024-12-16 12:31:34.789776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.054 ms 00:19:27.711 [2024-12-16 12:31:34.789793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.711 [2024-12-16 12:31:34.802478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.711 [2024-12-16 12:31:34.802571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:27.711 [2024-12-16 12:31:34.802614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.612 ms 00:19:27.711 [2024-12-16 12:31:34.802635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.711 [2024-12-16 12:31:34.802820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.711 [2024-12-16 12:31:34.802873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:27.711 [2024-12-16 12:31:34.802894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:19:27.711 [2024-12-16 12:31:34.802910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.970 [2024-12-16 12:31:34.820990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.970 [2024-12-16 12:31:34.821080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:27.970 [2024-12-16 12:31:34.821121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.017 ms 00:19:27.970 [2024-12-16 12:31:34.821138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.970 [2024-12-16 12:31:34.838611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.971 [2024-12-16 12:31:34.838698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:27.971 [2024-12-16 12:31:34.838741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.403 ms 00:19:27.971 [2024-12-16 12:31:34.838757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.971 [2024-12-16 12:31:34.855818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.971 [2024-12-16 12:31:34.855907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:27.971 [2024-12-16 12:31:34.855948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.005 ms 00:19:27.971 [2024-12-16 12:31:34.855964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.971 [2024-12-16 12:31:34.873223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.971 [2024-12-16 12:31:34.873311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:27.971 [2024-12-16 12:31:34.873352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.168 ms 00:19:27.971 [2024-12-16 12:31:34.873369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.971 [2024-12-16 12:31:34.873436] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:27.971 [2024-12-16 12:31:34.873461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:27.971 [2024-12-16 12:31:34.873488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:27.971 [2024-12-16 12:31:34.873511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:27.971 [2024-12-16 12:31:34.873535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:27.971 [2024-12-16 12:31:34.873602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:27.971 [2024-12-16 12:31:34.873629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:27.971 [2024-12-16 12:31:34.873652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:27.971 [2024-12-16 12:31:34.873676] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
[... Band 9 through Band 100: 92 identical lines, each "0 / 261120 wr_cnt: 0 state: free" ...]
00:19:27.972 [2024-12-16 12:31:34.876848] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:27.972 [2024-12-16 12:31:34.876869] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f50ddf9-d4f2-4e3f-9574-029fb265c97a
00:19:27.972 [2024-12-16 12:31:34.876891] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:27.972 [2024-12-16 12:31:34.876936] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:27.972 [2024-12-16 12:31:34.876953] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:27.972 [2024-12-16 12:31:34.876973] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:27.972 [2024-12-16 12:31:34.876988] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:27.972 [2024-12-16 12:31:34.877005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:27.972 [2024-12-16 12:31:34.877012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:27.972 [2024-12-16 12:31:34.877019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:27.972 [2024-12-16 12:31:34.877023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:27.972 [2024-12-16 12:31:34.877032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.972 [2024-12-16 12:31:34.877038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:27.972 [2024-12-16 12:31:34.877046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:19:27.972 [2024-12-16 12:31:34.877052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:34.886941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.972 [2024-12-16 12:31:34.886968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:27.972 [2024-12-16 12:31:34.886979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.843 ms 00:19:27.972 [2024-12-16 12:31:34.886986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:34.887303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.972 [2024-12-16 12:31:34.887344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:27.972 [2024-12-16 12:31:34.887355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:19:27.972 [2024-12-16 12:31:34.887361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:34.924045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:34.924074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.972 [2024-12-16 12:31:34.924083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:34.924090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:34.924180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:34.924188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.972 [2024-12-16 12:31:34.924197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:34.924202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:34.924265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:34.924273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.972 [2024-12-16 12:31:34.924284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:34.924290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:34.924318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:34.924324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.972 [2024-12-16 12:31:34.924332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:34.924338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:34.990798] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:34.990940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.972 [2024-12-16 12:31:34.990957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:34.990964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:35.042762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:35.042881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.972 [2024-12-16 12:31:35.042924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:35.042943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:35.043042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:35.043063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.972 [2024-12-16 12:31:35.043083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:35.043100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:35.043167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:35.043309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.972 [2024-12-16 12:31:35.043328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:35.043342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:35.043455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:35.043476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.972 [2024-12-16 12:31:35.043552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:35.043569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:35.043624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:35.043643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:27.972 [2024-12-16 12:31:35.043724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:35.043743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:35.043809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:35.043851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.972 [2024-12-16 12:31:35.043909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:35.043923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.972 [2024-12-16 12:31:35.043986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.972 [2024-12-16 12:31:35.044005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.972 [2024-12-16 12:31:35.044022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.972 [2024-12-16 12:31:35.044036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:27.973 [2024-12-16 12:31:35.044296] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 284.189 ms, result 0 00:19:27.973 true 00:19:27.973 12:31:35 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78049 00:19:27.973 12:31:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78049 ']' 00:19:27.973 12:31:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78049 00:19:27.973 12:31:35 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:27.973 12:31:35 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.973 12:31:35 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78049 00:19:28.231 killing process with pid 78049 00:19:28.231 12:31:35 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.231 12:31:35 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.231 12:31:35 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78049' 00:19:28.231 12:31:35 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78049 00:19:28.231 12:31:35 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78049 00:19:34.788 12:31:40 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:34.788 65536+0 records in 00:19:34.788 65536+0 records out 00:19:34.788 268435456 bytes (268 MB, 256 MiB) copied, 1.07024 s, 251 MB/s 00:19:34.788 12:31:41 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:34.788 [2024-12-16 12:31:41.892097] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
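The teardown traced above follows the usual autotest_common.sh shape: probe the PID with kill -0, read its command name with ps, then kill and wait so the exit status is reaped. A minimal sketch of that flow, assuming a child process whose PID is in $1 (the sudo special case and the error handling of the real helper are omitted):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                           # non-zero exit if the process is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")  # command name; an SPDK app shows up as reactor_0
        echo "killing process with pid $pid"
        kill "$pid"                              # SIGTERM by default
        wait "$pid"                              # reap the child and surface its exit status
    }

The dd step above also sizes the test input: 65536 blocks of 4 KiB is exactly 256 MiB, and 268435456 bytes in 1.07024 s works out to the reported 251 MB/s. spdk_dd then replays that random_pattern file into the ftl0 bdev via the --if/--ob/--json invocation shown above.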
00:19:34.788 [2024-12-16 12:31:41.892210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78231 ] 00:19:35.047 [2024-12-16 12:31:42.043234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.047 [2024-12-16 12:31:42.130605] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.307 [2024-12-16 12:31:42.363981] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.307 [2024-12-16 12:31:42.364036] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.569 [2024-12-16 12:31:42.517520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.569 [2024-12-16 12:31:42.517555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:35.569 [2024-12-16 12:31:42.517566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.569 [2024-12-16 12:31:42.517573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.519780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.569 [2024-12-16 12:31:42.519912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.569 [2024-12-16 12:31:42.519926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.195 ms 00:19:35.569 [2024-12-16 12:31:42.519933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.520000] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:35.569 [2024-12-16 12:31:42.520611] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:35.569 [2024-12-16 12:31:42.520635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.569 [2024-12-16 12:31:42.520642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.569 [2024-12-16 12:31:42.520649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:19:35.569 [2024-12-16 12:31:42.520656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.522061] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:35.569 [2024-12-16 12:31:42.532453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.569 [2024-12-16 12:31:42.532482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:35.569 [2024-12-16 12:31:42.532491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.392 ms 00:19:35.569 [2024-12-16 12:31:42.532498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.532576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.569 [2024-12-16 12:31:42.532585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:35.569 [2024-12-16 12:31:42.532592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:35.569 [2024-12-16 12:31:42.532598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.538991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:35.569 [2024-12-16 12:31:42.539111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.569 [2024-12-16 12:31:42.539123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.359 ms 00:19:35.569 [2024-12-16 12:31:42.539129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.539221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.569 [2024-12-16 12:31:42.539231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.569 [2024-12-16 12:31:42.539237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:35.569 [2024-12-16 12:31:42.539244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.539273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.569 [2024-12-16 12:31:42.539280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:35.569 [2024-12-16 12:31:42.539287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:35.569 [2024-12-16 12:31:42.539293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.539311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:35.569 [2024-12-16 12:31:42.542351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.569 [2024-12-16 12:31:42.542374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.569 [2024-12-16 12:31:42.542382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.045 ms 00:19:35.569 [2024-12-16 12:31:42.542388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.542422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.569 [2024-12-16 12:31:42.542429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:35.569 [2024-12-16 12:31:42.542435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:35.569 [2024-12-16 12:31:42.542441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.542458] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:35.569 [2024-12-16 12:31:42.542476] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:35.569 [2024-12-16 12:31:42.542504] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:35.569 [2024-12-16 12:31:42.542517] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:35.569 [2024-12-16 12:31:42.542599] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:35.569 [2024-12-16 12:31:42.542608] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:35.569 [2024-12-16 12:31:42.542616] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:35.569 [2024-12-16 12:31:42.542627] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:35.569 [2024-12-16 12:31:42.542634] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:35.569 [2024-12-16 12:31:42.542640] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:35.569 [2024-12-16 12:31:42.542646] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:35.569 [2024-12-16 12:31:42.542652] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:35.569 [2024-12-16 12:31:42.542658] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:35.569 [2024-12-16 12:31:42.542664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.569 [2024-12-16 12:31:42.542671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:35.569 [2024-12-16 12:31:42.542677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:19:35.569 [2024-12-16 12:31:42.542683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.542761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.569 [2024-12-16 12:31:42.542770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:35.569 [2024-12-16 12:31:42.542776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:35.569 [2024-12-16 12:31:42.542782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.569 [2024-12-16 12:31:42.542860] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:35.569 [2024-12-16 12:31:42.542868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:35.570 [2024-12-16 12:31:42.542875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.570 [2024-12-16 12:31:42.542882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.570 [2024-12-16 12:31:42.542889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:35.570 [2024-12-16 12:31:42.542894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:35.570 [2024-12-16 12:31:42.542900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:35.570 [2024-12-16 12:31:42.542906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:35.570 [2024-12-16 12:31:42.542912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:35.570 [2024-12-16 12:31:42.542920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.570 [2024-12-16 12:31:42.542926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:35.570 [2024-12-16 12:31:42.542938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:35.570 [2024-12-16 12:31:42.542943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.570 [2024-12-16 12:31:42.542948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:35.570 [2024-12-16 12:31:42.542954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:35.570 [2024-12-16 12:31:42.542959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.570 [2024-12-16 12:31:42.542964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:35.570 [2024-12-16 12:31:42.542969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:35.570 [2024-12-16 12:31:42.542974] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.570 [2024-12-16 12:31:42.542980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:35.570 [2024-12-16 12:31:42.542985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:35.570 [2024-12-16 12:31:42.542991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.570 [2024-12-16 12:31:42.542996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:35.570 [2024-12-16 12:31:42.543001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:35.570 [2024-12-16 12:31:42.543007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.570 [2024-12-16 12:31:42.543013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:35.570 [2024-12-16 12:31:42.543018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:35.570 [2024-12-16 12:31:42.543023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.570 [2024-12-16 12:31:42.543029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:35.570 [2024-12-16 12:31:42.543034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:35.570 [2024-12-16 12:31:42.543039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.570 [2024-12-16 12:31:42.543044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:35.570 [2024-12-16 12:31:42.543049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:35.570 [2024-12-16 12:31:42.543054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.570 [2024-12-16 12:31:42.543059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:35.570 [2024-12-16 12:31:42.543063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:35.570 [2024-12-16 12:31:42.543069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.570 [2024-12-16 12:31:42.543075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:35.570 [2024-12-16 12:31:42.543080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:35.570 [2024-12-16 12:31:42.543086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.570 [2024-12-16 12:31:42.543092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:35.570 [2024-12-16 12:31:42.543098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:35.570 [2024-12-16 12:31:42.543104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.570 [2024-12-16 12:31:42.543109] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:35.570 [2024-12-16 12:31:42.543115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:35.570 [2024-12-16 12:31:42.543124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.570 [2024-12-16 12:31:42.543130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.570 [2024-12-16 12:31:42.543136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:35.570 [2024-12-16 12:31:42.543141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:35.570 [2024-12-16 12:31:42.543146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:35.570 
[2024-12-16 12:31:42.543151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:35.570 [2024-12-16 12:31:42.543173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:35.570 [2024-12-16 12:31:42.543179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:35.570 [2024-12-16 12:31:42.543186] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:35.570 [2024-12-16 12:31:42.543193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.570 [2024-12-16 12:31:42.543200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:35.570 [2024-12-16 12:31:42.543206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:35.570 [2024-12-16 12:31:42.543212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:35.570 [2024-12-16 12:31:42.543218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:35.570 [2024-12-16 12:31:42.543224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:35.570 [2024-12-16 12:31:42.543230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:35.570 [2024-12-16 12:31:42.543236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:35.570 [2024-12-16 12:31:42.543241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:35.570 [2024-12-16 12:31:42.543247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:35.570 [2024-12-16 12:31:42.543253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:35.570 [2024-12-16 12:31:42.543258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:35.570 [2024-12-16 12:31:42.543264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:35.570 [2024-12-16 12:31:42.543270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:35.570 [2024-12-16 12:31:42.543276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:35.570 [2024-12-16 12:31:42.543281] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:35.570 [2024-12-16 12:31:42.543295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.570 [2024-12-16 12:31:42.543301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:35.570 [2024-12-16 12:31:42.543306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:35.570 [2024-12-16 12:31:42.543314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:35.570 [2024-12-16 12:31:42.543319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:35.570 [2024-12-16 12:31:42.543325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.570 [2024-12-16 12:31:42.543334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:35.570 [2024-12-16 12:31:42.543341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:19:35.570 [2024-12-16 12:31:42.543347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.570 [2024-12-16 12:31:42.567656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.570 [2024-12-16 12:31:42.567684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.570 [2024-12-16 12:31:42.567694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.255 ms 00:19:35.570 [2024-12-16 12:31:42.567700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.570 [2024-12-16 12:31:42.567799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.570 [2024-12-16 12:31:42.567807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:35.570 [2024-12-16 12:31:42.567814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:35.570 [2024-12-16 12:31:42.567820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.570 [2024-12-16 12:31:42.611006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.570 [2024-12-16 12:31:42.611040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.570 [2024-12-16 12:31:42.611052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.167 ms 00:19:35.570 [2024-12-16 12:31:42.611058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.570 [2024-12-16 12:31:42.611135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.570 [2024-12-16 12:31:42.611144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.570 [2024-12-16 12:31:42.611151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:35.570 [2024-12-16 12:31:42.611174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.570 [2024-12-16 12:31:42.611558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.570 [2024-12-16 12:31:42.611571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.570 [2024-12-16 12:31:42.611578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:19:35.570 [2024-12-16 12:31:42.611588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.570 [2024-12-16 12:31:42.611704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.570 [2024-12-16 12:31:42.611713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.570 [2024-12-16 12:31:42.611719] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:35.570 [2024-12-16 12:31:42.611725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.570 [2024-12-16 12:31:42.623968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.570 [2024-12-16 12:31:42.623996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.570 [2024-12-16 12:31:42.624004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.226 ms 00:19:35.571 [2024-12-16 12:31:42.624011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.571 [2024-12-16 12:31:42.634584] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:35.571 [2024-12-16 12:31:42.634614] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:35.571 [2024-12-16 12:31:42.634624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.571 [2024-12-16 12:31:42.634631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:35.571 [2024-12-16 12:31:42.634638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.532 ms 00:19:35.571 [2024-12-16 12:31:42.634644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.571 [2024-12-16 12:31:42.653275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.571 [2024-12-16 12:31:42.653303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:35.571 [2024-12-16 12:31:42.653313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.572 ms 00:19:35.571 [2024-12-16 12:31:42.653320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.571 [2024-12-16 12:31:42.662235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.571 [2024-12-16 12:31:42.662260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:35.571 [2024-12-16 12:31:42.662269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.871 ms 00:19:35.571 [2024-12-16 12:31:42.662275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.571 [2024-12-16 12:31:42.671496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.571 [2024-12-16 12:31:42.671521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:35.571 [2024-12-16 12:31:42.671529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.179 ms 00:19:35.571 [2024-12-16 12:31:42.671534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.571 [2024-12-16 12:31:42.672038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.571 [2024-12-16 12:31:42.672055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.571 [2024-12-16 12:31:42.672063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:19:35.571 [2024-12-16 12:31:42.672070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.830 [2024-12-16 12:31:42.720113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.830 [2024-12-16 12:31:42.720144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:35.830 [2024-12-16 12:31:42.720166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.025 ms 00:19:35.830 [2024-12-16 12:31:42.720173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.830 [2024-12-16 12:31:42.728487] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:35.830 [2024-12-16 12:31:42.742938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.830 [2024-12-16 12:31:42.742969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.830 [2024-12-16 12:31:42.742979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.689 ms 00:19:35.830 [2024-12-16 12:31:42.742986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.830 [2024-12-16 12:31:42.743062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.830 [2024-12-16 12:31:42.743070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:35.830 [2024-12-16 12:31:42.743078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:35.830 [2024-12-16 12:31:42.743084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.830 [2024-12-16 12:31:42.743129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.830 [2024-12-16 12:31:42.743136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.830 [2024-12-16 12:31:42.743144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:35.830 [2024-12-16 12:31:42.743151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.830 [2024-12-16 12:31:42.743201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.830 [2024-12-16 12:31:42.743211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.830 [2024-12-16 12:31:42.743217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:35.830 [2024-12-16 12:31:42.743224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.830 [2024-12-16 12:31:42.743253] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:35.830 [2024-12-16 12:31:42.743261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.830 [2024-12-16 12:31:42.743267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:35.830 [2024-12-16 12:31:42.743274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:35.830 [2024-12-16 12:31:42.743280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.830 [2024-12-16 12:31:42.762641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.830 [2024-12-16 12:31:42.762670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.830 [2024-12-16 12:31:42.762679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.343 ms 00:19:35.830 [2024-12-16 12:31:42.762685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.830 [2024-12-16 12:31:42.762762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.830 [2024-12-16 12:31:42.762770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.830 [2024-12-16 12:31:42.762777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:35.830 [2024-12-16 12:31:42.762783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
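The sizes in the startup trace above are mutually consistent, and match the num_blocks=23592960 that jq extracted from the bdev JSON before the unload: with the reported 4 B L2P address size, 23592960 entries need 90 MiB, exactly the l2p region size in the layout dump, and, assuming the FTL's 4 KiB logical block, the same entry count addresses 90 GiB of user data on the 103424.00 MiB base device. A quick spot-check in shell arithmetic (illustrative only):

    # L2P region size: entries x 4 B per address, in MiB
    echo $((23592960 * 4 / 1024 / 1024))      # -> 90, matching "Region l2p ... blocks: 90.00 MiB"
    # user-addressable capacity: entries x 4 KiB blocks, in MiB (assumes a 4 KiB FTL block size)
    echo $((23592960 * 4096 / 1024 / 1024))   # -> 92160 MiB, i.e. 90 GiB exposed as ftl0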
00:19:35.830 [2024-12-16 12:31:42.763678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.830 [2024-12-16 12:31:42.766129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 245.912 ms, result 0 00:19:35.830 [2024-12-16 12:31:42.766984] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.830 [2024-12-16 12:31:42.777926] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.766  [2024-12-16T12:31:44.813Z] Copying: 26/256 [MB] (26 MBps) [2024-12-16T12:31:46.193Z] Copying: 47/256 [MB] (20 MBps) [2024-12-16T12:31:47.136Z] Copying: 66/256 [MB] (19 MBps) [2024-12-16T12:31:48.080Z] Copying: 78/256 [MB] (12 MBps) [2024-12-16T12:31:49.024Z] Copying: 100/256 [MB] (22 MBps) [2024-12-16T12:31:49.968Z] Copying: 123/256 [MB] (22 MBps) [2024-12-16T12:31:50.913Z] Copying: 144/256 [MB] (21 MBps) [2024-12-16T12:31:51.857Z] Copying: 165/256 [MB] (21 MBps) [2024-12-16T12:31:52.799Z] Copying: 184/256 [MB] (19 MBps) [2024-12-16T12:31:54.184Z] Copying: 199/256 [MB] (14 MBps) [2024-12-16T12:31:55.126Z] Copying: 214256/262144 [kB] (10212 kBps) [2024-12-16T12:31:56.068Z] Copying: 219/256 [MB] (10 MBps) [2024-12-16T12:31:57.009Z] Copying: 231/256 [MB] (11 MBps) [2024-12-16T12:31:57.009Z] Copying: 253/256 [MB] (22 MBps) [2024-12-16T12:31:57.009Z] Copying: 256/256 [MB] (average 18 MBps)[2024-12-16 12:31:56.963358] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:49.903 [2024-12-16 12:31:56.971015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.903 [2024-12-16 12:31:56.971127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:49.903 [2024-12-16 12:31:56.971145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:49.903 [2024-12-16 12:31:56.971153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.903 [2024-12-16 12:31:56.971189] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:49.903 [2024-12-16 12:31:56.973459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.903 [2024-12-16 12:31:56.973483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:49.903 [2024-12-16 12:31:56.973493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.259 ms 00:19:49.903 [2024-12-16 12:31:56.973499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.903 [2024-12-16 12:31:56.976051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.904 [2024-12-16 12:31:56.976076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:49.904 [2024-12-16 12:31:56.976085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:19:49.904 [2024-12-16 12:31:56.976092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.904 [2024-12-16 12:31:56.982715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.904 [2024-12-16 12:31:56.982744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:49.904 [2024-12-16 12:31:56.982752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.611 ms 00:19:49.904 [2024-12-16 12:31:56.982758] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.904 [2024-12-16 12:31:56.988000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.904 [2024-12-16 12:31:56.988021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:49.904 [2024-12-16 12:31:56.988030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.207 ms 00:19:49.904 [2024-12-16 12:31:56.988038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.904 [2024-12-16 12:31:57.006186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.904 [2024-12-16 12:31:57.006212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:49.904 [2024-12-16 12:31:57.006221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.120 ms 00:19:49.904 [2024-12-16 12:31:57.006227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.165 [2024-12-16 12:31:57.017553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.165 [2024-12-16 12:31:57.017665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:50.165 [2024-12-16 12:31:57.017682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.297 ms 00:19:50.165 [2024-12-16 12:31:57.017689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.165 [2024-12-16 12:31:57.017779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.165 [2024-12-16 12:31:57.017787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:50.165 [2024-12-16 12:31:57.017794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:50.165 [2024-12-16 12:31:57.017806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.165 [2024-12-16 12:31:57.036738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.165 [2024-12-16 12:31:57.036839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:50.165 [2024-12-16 12:31:57.036850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.920 ms 00:19:50.165 [2024-12-16 12:31:57.036856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.165 [2024-12-16 12:31:57.055087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.165 [2024-12-16 12:31:57.055110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:50.165 [2024-12-16 12:31:57.055118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.206 ms 00:19:50.165 [2024-12-16 12:31:57.055124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.165 [2024-12-16 12:31:57.072860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.165 [2024-12-16 12:31:57.072885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.165 [2024-12-16 12:31:57.072892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.709 ms 00:19:50.165 [2024-12-16 12:31:57.072898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.165 [2024-12-16 12:31:57.090532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.165 [2024-12-16 12:31:57.090627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.165 [2024-12-16 12:31:57.090639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 17.585 ms 00:19:50.165 [2024-12-16 12:31:57.090644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.165 [2024-12-16 12:31:57.090668] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.165 [2024-12-16 12:31:57.090679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.165 [2024-12-16 12:31:57.090687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 
[2024-12-16 12:31:57.090812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:19:50.166 [2024-12-16 12:31:57.090961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.090997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:50.166 [2024-12-16 12:31:57.091231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:50.167 [2024-12-16 12:31:57.091237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:50.167 [2024-12-16 12:31:57.091248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:50.167 [2024-12-16 12:31:57.091254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:50.167 [2024-12-16 12:31:57.091260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:50.167 [2024-12-16 12:31:57.091266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:50.167 [2024-12-16 12:31:57.091271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:50.167 [2024-12-16 12:31:57.091281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:50.167 [2024-12-16 12:31:57.091293] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:50.167 [2024-12-16 12:31:57.091299] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f50ddf9-d4f2-4e3f-9574-029fb265c97a 00:19:50.167 [2024-12-16 12:31:57.091307] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:50.167 [2024-12-16 12:31:57.091312] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:50.167 [2024-12-16 12:31:57.091318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:50.167 [2024-12-16 12:31:57.091325] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:50.167 [2024-12-16 12:31:57.091331] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:50.167 [2024-12-16 12:31:57.091338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:50.167 [2024-12-16 12:31:57.091343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:50.167 [2024-12-16 12:31:57.091349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:50.167 [2024-12-16 12:31:57.091354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:50.167 [2024-12-16 12:31:57.091359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-12-16 12:31:57.091367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:50.167 [2024-12-16 12:31:57.091374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:19:50.167 [2024-12-16 12:31:57.091379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.101475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-12-16 12:31:57.101565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:50.167 [2024-12-16 12:31:57.101576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.083 ms 00:19:50.167 [2024-12-16 12:31:57.101581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.101880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-12-16 12:31:57.101888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:50.167 [2024-12-16 12:31:57.101895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:19:50.167 [2024-12-16 12:31:57.101900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.131202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 12:31:57.131230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.167 [2024-12-16 12:31:57.131238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.131244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.131319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 
12:31:57.131327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.167 [2024-12-16 12:31:57.131333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.131339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.131373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 12:31:57.131381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.167 [2024-12-16 12:31:57.131386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.131392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.131405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 12:31:57.131414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.167 [2024-12-16 12:31:57.131420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.131425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.194263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 12:31:57.194298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.167 [2024-12-16 12:31:57.194308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.194315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.246409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 12:31:57.246441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.167 [2024-12-16 12:31:57.246450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.246457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.246516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 12:31:57.246524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.167 [2024-12-16 12:31:57.246531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.246537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.246562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 12:31:57.246568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.167 [2024-12-16 12:31:57.246578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.246585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.246660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 12:31:57.246669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.167 [2024-12-16 12:31:57.246675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.246682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.246707] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 12:31:57.246715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:50.167 [2024-12-16 12:31:57.246721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.246730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.246766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 12:31:57.246774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.167 [2024-12-16 12:31:57.246780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.246786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.246824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.167 [2024-12-16 12:31:57.246833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.167 [2024-12-16 12:31:57.246842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.167 [2024-12-16 12:31:57.246848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-12-16 12:31:57.246974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.951 ms, result 0 00:19:51.110 00:19:51.110 00:19:51.110 12:31:58 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78408 00:19:51.110 12:31:58 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78408 00:19:51.110 12:31:58 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:51.110 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78408 ']' 00:19:51.110 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.110 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.110 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.110 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.110 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:51.110 [2024-12-16 12:31:58.143264] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:51.110 [2024-12-16 12:31:58.143385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78408 ] 00:19:51.371 [2024-12-16 12:31:58.299597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.371 [2024-12-16 12:31:58.388850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.944 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.944 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:51.944 12:31:58 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:52.204 [2024-12-16 12:31:59.184853] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.204 [2024-12-16 12:31:59.184905] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.466 [2024-12-16 12:31:59.357955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.466 [2024-12-16 12:31:59.357991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:52.466 [2024-12-16 12:31:59.358004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:52.466 [2024-12-16 12:31:59.358011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.466 [2024-12-16 12:31:59.360450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.466 [2024-12-16 12:31:59.360588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.466 [2024-12-16 12:31:59.360605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:19:52.466 [2024-12-16 12:31:59.360612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.466 [2024-12-16 12:31:59.360785] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:52.466 [2024-12-16 12:31:59.361374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:52.466 [2024-12-16 12:31:59.361402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.466 [2024-12-16 12:31:59.361409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.466 [2024-12-16 12:31:59.361418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:19:52.466 [2024-12-16 12:31:59.361424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.466 [2024-12-16 12:31:59.362823] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:52.466 [2024-12-16 12:31:59.373313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.466 [2024-12-16 12:31:59.373344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:52.466 [2024-12-16 12:31:59.373353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.494 ms 00:19:52.466 [2024-12-16 12:31:59.373361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.466 [2024-12-16 12:31:59.373447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.466 [2024-12-16 12:31:59.373461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:52.466 [2024-12-16 12:31:59.373469] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:52.466 [2024-12-16 12:31:59.373479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.466 [2024-12-16 12:31:59.379811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.466 [2024-12-16 12:31:59.379935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.466 [2024-12-16 12:31:59.379948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.287 ms 00:19:52.466 [2024-12-16 12:31:59.379956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.466 [2024-12-16 12:31:59.380038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.466 [2024-12-16 12:31:59.380048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.466 [2024-12-16 12:31:59.380054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:52.466 [2024-12-16 12:31:59.380067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.466 [2024-12-16 12:31:59.380088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.466 [2024-12-16 12:31:59.380097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:52.466 [2024-12-16 12:31:59.380103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:52.466 [2024-12-16 12:31:59.380110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.466 [2024-12-16 12:31:59.380127] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:52.467 [2024-12-16 12:31:59.383174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.467 [2024-12-16 12:31:59.383196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.467 [2024-12-16 12:31:59.383205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.050 ms 00:19:52.467 [2024-12-16 12:31:59.383211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.467 [2024-12-16 12:31:59.383244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.467 [2024-12-16 12:31:59.383251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:52.467 [2024-12-16 12:31:59.383259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:52.467 [2024-12-16 12:31:59.383267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.467 [2024-12-16 12:31:59.383284] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:52.467 [2024-12-16 12:31:59.383301] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:52.467 [2024-12-16 12:31:59.383338] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:52.467 [2024-12-16 12:31:59.383351] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:52.467 [2024-12-16 12:31:59.383435] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:52.467 [2024-12-16 12:31:59.383444] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:52.467 [2024-12-16 12:31:59.383455] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:52.467 [2024-12-16 12:31:59.383463] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:52.467 [2024-12-16 12:31:59.383471] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:52.467 [2024-12-16 12:31:59.383478] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:52.467 [2024-12-16 12:31:59.383485] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:52.467 [2024-12-16 12:31:59.383492] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:52.467 [2024-12-16 12:31:59.383501] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:52.467 [2024-12-16 12:31:59.383508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.467 [2024-12-16 12:31:59.383515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:52.467 [2024-12-16 12:31:59.383521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:19:52.467 [2024-12-16 12:31:59.383528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.467 [2024-12-16 12:31:59.383606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.467 [2024-12-16 12:31:59.383615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:52.467 [2024-12-16 12:31:59.383622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:52.467 [2024-12-16 12:31:59.383629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.467 [2024-12-16 12:31:59.383706] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:52.467 [2024-12-16 12:31:59.383716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:52.467 [2024-12-16 12:31:59.383723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.467 [2024-12-16 12:31:59.383731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.467 [2024-12-16 12:31:59.383737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:52.467 [2024-12-16 12:31:59.383744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:52.467 [2024-12-16 12:31:59.383750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:52.467 [2024-12-16 12:31:59.383760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:52.467 [2024-12-16 12:31:59.383765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:52.467 [2024-12-16 12:31:59.383772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.467 [2024-12-16 12:31:59.383777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:52.467 [2024-12-16 12:31:59.383784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:52.467 [2024-12-16 12:31:59.383790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.467 [2024-12-16 12:31:59.383799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:52.467 [2024-12-16 12:31:59.383805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:52.467 [2024-12-16 12:31:59.383811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.467 
[2024-12-16 12:31:59.383817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:52.467 [2024-12-16 12:31:59.383824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:52.467 [2024-12-16 12:31:59.383833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.467 [2024-12-16 12:31:59.383839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:52.467 [2024-12-16 12:31:59.383845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:52.467 [2024-12-16 12:31:59.383852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.467 [2024-12-16 12:31:59.383857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:52.467 [2024-12-16 12:31:59.383864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:52.467 [2024-12-16 12:31:59.383869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.467 [2024-12-16 12:31:59.383876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:52.467 [2024-12-16 12:31:59.383881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:52.467 [2024-12-16 12:31:59.383888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.467 [2024-12-16 12:31:59.383893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:52.467 [2024-12-16 12:31:59.383901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:52.467 [2024-12-16 12:31:59.383905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.467 [2024-12-16 12:31:59.383912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:52.467 [2024-12-16 12:31:59.383917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:52.467 [2024-12-16 12:31:59.383923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.467 [2024-12-16 12:31:59.383928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:52.467 [2024-12-16 12:31:59.383935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:52.467 [2024-12-16 12:31:59.383939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.467 [2024-12-16 12:31:59.383946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:52.467 [2024-12-16 12:31:59.383950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:52.467 [2024-12-16 12:31:59.383958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.467 [2024-12-16 12:31:59.383963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:52.467 [2024-12-16 12:31:59.383970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:52.467 [2024-12-16 12:31:59.383975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.467 [2024-12-16 12:31:59.383981] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:52.467 [2024-12-16 12:31:59.383989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:52.467 [2024-12-16 12:31:59.383998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.467 [2024-12-16 12:31:59.384004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.467 [2024-12-16 12:31:59.384012] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:52.467 [2024-12-16 12:31:59.384017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:52.467 [2024-12-16 12:31:59.384024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:52.467 [2024-12-16 12:31:59.384029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:52.467 [2024-12-16 12:31:59.384035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:52.467 [2024-12-16 12:31:59.384041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:52.467 [2024-12-16 12:31:59.384049] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:52.467 [2024-12-16 12:31:59.384056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.467 [2024-12-16 12:31:59.384073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:52.467 [2024-12-16 12:31:59.384078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:52.467 [2024-12-16 12:31:59.384086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:52.467 [2024-12-16 12:31:59.384091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:52.467 [2024-12-16 12:31:59.384098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:52.467 [2024-12-16 12:31:59.384103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:52.467 [2024-12-16 12:31:59.384109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:52.467 [2024-12-16 12:31:59.384115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:52.467 [2024-12-16 12:31:59.384121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:52.467 [2024-12-16 12:31:59.384127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:52.467 [2024-12-16 12:31:59.384134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:52.468 [2024-12-16 12:31:59.384139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:52.468 [2024-12-16 12:31:59.384146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:52.468 [2024-12-16 12:31:59.384151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:52.468 [2024-12-16 12:31:59.384173] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:52.468 [2024-12-16 
12:31:59.384180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.468 [2024-12-16 12:31:59.384191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:52.468 [2024-12-16 12:31:59.384197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:52.468 [2024-12-16 12:31:59.384204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:52.468 [2024-12-16 12:31:59.384209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:52.468 [2024-12-16 12:31:59.384217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.384223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:52.468 [2024-12-16 12:31:59.384233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:19:52.468 [2024-12-16 12:31:59.384241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.408544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.408573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.468 [2024-12-16 12:31:59.408584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.244 ms 00:19:52.468 [2024-12-16 12:31:59.408593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.408686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.408694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:52.468 [2024-12-16 12:31:59.408702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:52.468 [2024-12-16 12:31:59.408709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.435148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.435180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.468 [2024-12-16 12:31:59.435190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.421 ms 00:19:52.468 [2024-12-16 12:31:59.435197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.435243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.435251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.468 [2024-12-16 12:31:59.435259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.468 [2024-12-16 12:31:59.435265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.435649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.435669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.468 [2024-12-16 12:31:59.435680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:19:52.468 [2024-12-16 12:31:59.435686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.435798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.435805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.468 [2024-12-16 12:31:59.435812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:52.468 [2024-12-16 12:31:59.435818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.449278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.449302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.468 [2024-12-16 12:31:59.449312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.440 ms 00:19:52.468 [2024-12-16 12:31:59.449317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.472226] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:52.468 [2024-12-16 12:31:59.472258] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:52.468 [2024-12-16 12:31:59.472272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.472279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:52.468 [2024-12-16 12:31:59.472288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.849 ms 00:19:52.468 [2024-12-16 12:31:59.472299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.491349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.491377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:52.468 [2024-12-16 12:31:59.491389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.987 ms 00:19:52.468 [2024-12-16 12:31:59.491396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.500604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.500629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:52.468 [2024-12-16 12:31:59.500640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.147 ms 00:19:52.468 [2024-12-16 12:31:59.500646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.509668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.509788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:52.468 [2024-12-16 12:31:59.509805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.977 ms 00:19:52.468 [2024-12-16 12:31:59.509811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.510306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.510319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:52.468 [2024-12-16 12:31:59.510328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:19:52.468 [2024-12-16 12:31:59.510336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 
12:31:59.558408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.468 [2024-12-16 12:31:59.558439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:52.468 [2024-12-16 12:31:59.558451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.052 ms 00:19:52.468 [2024-12-16 12:31:59.558458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.468 [2024-12-16 12:31:59.567049] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:52.730 [2024-12-16 12:31:59.581425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.730 [2024-12-16 12:31:59.581458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:52.730 [2024-12-16 12:31:59.581470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.894 ms 00:19:52.730 [2024-12-16 12:31:59.581478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.730 [2024-12-16 12:31:59.581543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.730 [2024-12-16 12:31:59.581553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:52.730 [2024-12-16 12:31:59.581560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:52.730 [2024-12-16 12:31:59.581568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.730 [2024-12-16 12:31:59.581614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.730 [2024-12-16 12:31:59.581624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.730 [2024-12-16 12:31:59.581630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:52.730 [2024-12-16 12:31:59.581640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.730 [2024-12-16 12:31:59.581660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.730 [2024-12-16 12:31:59.581669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.730 [2024-12-16 12:31:59.581675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:52.730 [2024-12-16 12:31:59.581684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.730 [2024-12-16 12:31:59.581714] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:52.730 [2024-12-16 12:31:59.581724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.730 [2024-12-16 12:31:59.581733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:52.730 [2024-12-16 12:31:59.581740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:52.730 [2024-12-16 12:31:59.581746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.730 [2024-12-16 12:31:59.601013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.730 [2024-12-16 12:31:59.601139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.730 [2024-12-16 12:31:59.601169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.242 ms 00:19:52.730 [2024-12-16 12:31:59.601177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.730 [2024-12-16 12:31:59.601252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.730 [2024-12-16 12:31:59.601261] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.730 [2024-12-16 12:31:59.601271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:52.730 [2024-12-16 12:31:59.601280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.730 [2024-12-16 12:31:59.602088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.730 [2024-12-16 12:31:59.604364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 243.883 ms, result 0 00:19:52.730 [2024-12-16 12:31:59.605721] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.730 Some configs were skipped because the RPC state that can call them passed over. 00:19:52.730 12:31:59 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:52.730 [2024-12-16 12:31:59.831196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.730 [2024-12-16 12:31:59.831304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:52.730 [2024-12-16 12:31:59.831352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.758 ms 00:19:52.730 [2024-12-16 12:31:59.831373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.730 [2024-12-16 12:31:59.831429] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.990 ms, result 0 00:19:52.730 true 00:19:52.991 12:31:59 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:52.991 [2024-12-16 12:32:00.026999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.991 [2024-12-16 12:32:00.027103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:52.991 [2024-12-16 12:32:00.027150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.378 ms 00:19:52.991 [2024-12-16 12:32:00.027179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.991 [2024-12-16 12:32:00.027222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.601 ms, result 0 00:19:52.991 true 00:19:52.991 12:32:00 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78408 00:19:52.991 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78408 ']' 00:19:52.991 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78408 00:19:52.991 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:52.991 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.991 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78408 00:19:52.991 killing process with pid 78408 00:19:52.991 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.991 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.991 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78408' 00:19:52.991 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78408 00:19:52.991 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78408 00:19:53.563 [2024-12-16 12:32:00.638496] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.563 [2024-12-16 12:32:00.638544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:53.563 [2024-12-16 12:32:00.638556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:53.563 [2024-12-16 12:32:00.638565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.563 [2024-12-16 12:32:00.638585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:53.563 [2024-12-16 12:32:00.640723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.563 [2024-12-16 12:32:00.640749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:53.563 [2024-12-16 12:32:00.640761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:19:53.563 [2024-12-16 12:32:00.640767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.563 [2024-12-16 12:32:00.641005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.563 [2024-12-16 12:32:00.641013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:53.563 [2024-12-16 12:32:00.641021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:19:53.563 [2024-12-16 12:32:00.641027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.563 [2024-12-16 12:32:00.644648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.563 [2024-12-16 12:32:00.644788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:53.563 [2024-12-16 12:32:00.644807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.602 ms 00:19:53.563 [2024-12-16 12:32:00.644813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.563 [2024-12-16 12:32:00.650091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.563 [2024-12-16 12:32:00.650115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:53.563 [2024-12-16 12:32:00.650124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.242 ms 00:19:53.563 [2024-12-16 12:32:00.650130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.563 [2024-12-16 12:32:00.658202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.563 [2024-12-16 12:32:00.658232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:53.563 [2024-12-16 12:32:00.658244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.015 ms 00:19:53.563 [2024-12-16 12:32:00.658250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.563 [2024-12-16 12:32:00.665255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.563 [2024-12-16 12:32:00.665281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:53.563 [2024-12-16 12:32:00.665291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.972 ms 00:19:53.563 [2024-12-16 12:32:00.665298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.563 [2024-12-16 12:32:00.665420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.563 [2024-12-16 12:32:00.665429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:53.563 [2024-12-16 12:32:00.665437] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:53.563 [2024-12-16 12:32:00.665444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.825 [2024-12-16 12:32:00.674253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.825 [2024-12-16 12:32:00.674275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:53.825 [2024-12-16 12:32:00.674285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.791 ms 00:19:53.825 [2024-12-16 12:32:00.674290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.825 [2024-12-16 12:32:00.682324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.825 [2024-12-16 12:32:00.682347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:53.825 [2024-12-16 12:32:00.682357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.002 ms 00:19:53.825 [2024-12-16 12:32:00.682363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.825 [2024-12-16 12:32:00.690013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.825 [2024-12-16 12:32:00.690046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:53.825 [2024-12-16 12:32:00.690055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.616 ms 00:19:53.825 [2024-12-16 12:32:00.690061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.825 [2024-12-16 12:32:00.697674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.825 [2024-12-16 12:32:00.697697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:53.825 [2024-12-16 12:32:00.697706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.548 ms 00:19:53.825 [2024-12-16 12:32:00.697711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.825 [2024-12-16 12:32:00.697740] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:53.825 [2024-12-16 12:32:00.697752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:53.825 [2024-12-16 12:32:00.697762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697822] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 
[2024-12-16 12:32:00.697988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.697994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:53.826 [2024-12-16 12:32:00.698151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:53.826 [2024-12-16 12:32:00.698354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:53.827 [2024-12-16 12:32:00.698451] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:53.827 [2024-12-16 12:32:00.698462] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f50ddf9-d4f2-4e3f-9574-029fb265c97a 00:19:53.827 [2024-12-16 12:32:00.698470] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:53.827 [2024-12-16 12:32:00.698477] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:53.827 [2024-12-16 12:32:00.698484] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:53.827 [2024-12-16 12:32:00.698492] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:53.827 [2024-12-16 12:32:00.698498] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:53.827 [2024-12-16 12:32:00.698506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:53.827 [2024-12-16 12:32:00.698512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:53.827 [2024-12-16 12:32:00.698518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:53.827 [2024-12-16 12:32:00.698523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:53.827 [2024-12-16 12:32:00.698530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:53.827 [2024-12-16 12:32:00.698536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:53.827 [2024-12-16 12:32:00.698543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:19:53.827 [2024-12-16 12:32:00.698550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.708871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.827 [2024-12-16 12:32:00.708896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:53.827 [2024-12-16 12:32:00.708908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.301 ms 00:19:53.827 [2024-12-16 12:32:00.708914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.709252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.827 [2024-12-16 12:32:00.709266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:53.827 [2024-12-16 12:32:00.709277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:19:53.827 [2024-12-16 12:32:00.709283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.746114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.746141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.827 [2024-12-16 12:32:00.746150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.746168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.746255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.746263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.827 [2024-12-16 12:32:00.746273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.746279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.746317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.746325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.827 [2024-12-16 12:32:00.746334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.746341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.746357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.746364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.827 [2024-12-16 12:32:00.746371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.746378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.809008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.809043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.827 [2024-12-16 12:32:00.809055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.809062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 
12:32:00.860230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.860265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.827 [2024-12-16 12:32:00.860276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.860285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.860357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.860365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.827 [2024-12-16 12:32:00.860375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.860381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.860408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.860415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.827 [2024-12-16 12:32:00.860423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.860429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.860510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.860518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.827 [2024-12-16 12:32:00.860526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.860532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.860560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.860567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:53.827 [2024-12-16 12:32:00.860575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.860582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.860620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.860627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.827 [2024-12-16 12:32:00.860637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.860643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.860685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.827 [2024-12-16 12:32:00.860693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.827 [2024-12-16 12:32:00.860701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.827 [2024-12-16 12:32:00.860707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.827 [2024-12-16 12:32:00.860835] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 222.314 ms, result 0 00:19:54.399 12:32:01 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:54.399 12:32:01 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:54.399 [2024-12-16 12:32:01.484105] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:54.399 [2024-12-16 12:32:01.484240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78455 ] 00:19:54.659 [2024-12-16 12:32:01.639660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.659 [2024-12-16 12:32:01.731054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.920 [2024-12-16 12:32:01.964354] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.920 [2024-12-16 12:32:01.964587] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.183 [2024-12-16 12:32:02.120851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.120888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:55.183 [2024-12-16 12:32:02.120899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:55.183 [2024-12-16 12:32:02.120906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.123166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.123193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.183 [2024-12-16 12:32:02.123202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.236 ms 00:19:55.183 [2024-12-16 12:32:02.123207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.123273] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:55.183 [2024-12-16 12:32:02.123795] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:55.183 [2024-12-16 12:32:02.123813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.123820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.183 [2024-12-16 12:32:02.123828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:19:55.183 [2024-12-16 12:32:02.123834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.125127] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:55.183 [2024-12-16 12:32:02.135791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.135818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:55.183 [2024-12-16 12:32:02.135827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.666 ms 00:19:55.183 [2024-12-16 12:32:02.135833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.135907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.135917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:55.183 [2024-12-16 12:32:02.135923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.019 ms 00:19:55.183 [2024-12-16 12:32:02.135929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.142188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.142332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.183 [2024-12-16 12:32:02.142345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.228 ms 00:19:55.183 [2024-12-16 12:32:02.142351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.142429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.142437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.183 [2024-12-16 12:32:02.142444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:55.183 [2024-12-16 12:32:02.142450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.142469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.142476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:55.183 [2024-12-16 12:32:02.142482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:55.183 [2024-12-16 12:32:02.142488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.142507] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:55.183 [2024-12-16 12:32:02.145428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.145543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.183 [2024-12-16 12:32:02.145556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.924 ms 00:19:55.183 [2024-12-16 12:32:02.145562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.145596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.145604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:55.183 [2024-12-16 12:32:02.145610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:55.183 [2024-12-16 12:32:02.145617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.145634] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:55.183 [2024-12-16 12:32:02.145651] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:55.183 [2024-12-16 12:32:02.145680] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:55.183 [2024-12-16 12:32:02.145693] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:55.183 [2024-12-16 12:32:02.145775] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:55.183 [2024-12-16 12:32:02.145785] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:55.183 [2024-12-16 12:32:02.145793] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:55.183 [2024-12-16 12:32:02.145803] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:55.183 [2024-12-16 12:32:02.145810] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:55.183 [2024-12-16 12:32:02.145817] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:55.183 [2024-12-16 12:32:02.145823] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:55.183 [2024-12-16 12:32:02.145829] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:55.183 [2024-12-16 12:32:02.145835] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:55.183 [2024-12-16 12:32:02.145841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.145846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:55.183 [2024-12-16 12:32:02.145853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:19:55.183 [2024-12-16 12:32:02.145858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.145936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.183 [2024-12-16 12:32:02.145945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:55.183 [2024-12-16 12:32:02.145953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:55.183 [2024-12-16 12:32:02.145958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.183 [2024-12-16 12:32:02.146035] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:55.183 [2024-12-16 12:32:02.146043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:55.183 [2024-12-16 12:32:02.146050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.183 [2024-12-16 12:32:02.146056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.183 [2024-12-16 12:32:02.146062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:55.183 [2024-12-16 12:32:02.146068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:55.183 [2024-12-16 12:32:02.146074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:55.183 [2024-12-16 12:32:02.146080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:55.184 [2024-12-16 12:32:02.146087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:55.184 [2024-12-16 12:32:02.146092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.184 [2024-12-16 12:32:02.146098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:55.184 [2024-12-16 12:32:02.146110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:55.184 [2024-12-16 12:32:02.146115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.184 [2024-12-16 12:32:02.146120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:55.184 [2024-12-16 12:32:02.146126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:55.184 [2024-12-16 12:32:02.146132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.184 [2024-12-16 12:32:02.146137] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:55.184 [2024-12-16 12:32:02.146143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:55.184 [2024-12-16 12:32:02.146147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.184 [2024-12-16 12:32:02.146153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:55.184 [2024-12-16 12:32:02.146173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:55.184 [2024-12-16 12:32:02.146179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.184 [2024-12-16 12:32:02.146184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:55.184 [2024-12-16 12:32:02.146190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:55.184 [2024-12-16 12:32:02.146195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.184 [2024-12-16 12:32:02.146201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:55.184 [2024-12-16 12:32:02.146206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:55.184 [2024-12-16 12:32:02.146212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.184 [2024-12-16 12:32:02.146218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:55.184 [2024-12-16 12:32:02.146223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:55.184 [2024-12-16 12:32:02.146229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.184 [2024-12-16 12:32:02.146234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:55.184 [2024-12-16 12:32:02.146239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:55.184 [2024-12-16 12:32:02.146244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.184 [2024-12-16 12:32:02.146249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:55.184 [2024-12-16 12:32:02.146255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:55.184 [2024-12-16 12:32:02.146260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.184 [2024-12-16 12:32:02.146265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:55.184 [2024-12-16 12:32:02.146270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:55.184 [2024-12-16 12:32:02.146275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.184 [2024-12-16 12:32:02.146281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:55.184 [2024-12-16 12:32:02.146288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:55.184 [2024-12-16 12:32:02.146293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.184 [2024-12-16 12:32:02.146300] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:55.184 [2024-12-16 12:32:02.146306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:55.184 [2024-12-16 12:32:02.146314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.184 [2024-12-16 12:32:02.146319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.184 [2024-12-16 12:32:02.146326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:55.184 
[2024-12-16 12:32:02.146332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:55.184 [2024-12-16 12:32:02.146337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:55.184 [2024-12-16 12:32:02.146343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:55.184 [2024-12-16 12:32:02.146348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:55.184 [2024-12-16 12:32:02.146352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:55.184 [2024-12-16 12:32:02.146360] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:55.184 [2024-12-16 12:32:02.146367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.184 [2024-12-16 12:32:02.146374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:55.184 [2024-12-16 12:32:02.146380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:55.184 [2024-12-16 12:32:02.146385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:55.184 [2024-12-16 12:32:02.146391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:55.184 [2024-12-16 12:32:02.146396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:55.184 [2024-12-16 12:32:02.146402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:55.184 [2024-12-16 12:32:02.146407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:55.184 [2024-12-16 12:32:02.146413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:55.184 [2024-12-16 12:32:02.146418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:55.184 [2024-12-16 12:32:02.146424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:55.184 [2024-12-16 12:32:02.146429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:55.184 [2024-12-16 12:32:02.146434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:55.184 [2024-12-16 12:32:02.146440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:55.184 [2024-12-16 12:32:02.146447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:55.184 [2024-12-16 12:32:02.146452] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:55.184 [2024-12-16 12:32:02.146458] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.184 [2024-12-16 12:32:02.146474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:55.184 [2024-12-16 12:32:02.146480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:55.184 [2024-12-16 12:32:02.146485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:55.184 [2024-12-16 12:32:02.146491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:55.184 [2024-12-16 12:32:02.146497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.184 [2024-12-16 12:32:02.146505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:55.184 [2024-12-16 12:32:02.146511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:19:55.184 [2024-12-16 12:32:02.146517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.184 [2024-12-16 12:32:02.170886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.184 [2024-12-16 12:32:02.170915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.184 [2024-12-16 12:32:02.170923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.313 ms 00:19:55.184 [2024-12-16 12:32:02.170930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.184 [2024-12-16 12:32:02.171027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.184 [2024-12-16 12:32:02.171035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:55.184 [2024-12-16 12:32:02.171042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:55.184 [2024-12-16 12:32:02.171048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.184 [2024-12-16 12:32:02.208681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.184 [2024-12-16 12:32:02.208816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.184 [2024-12-16 12:32:02.208835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.615 ms 00:19:55.184 [2024-12-16 12:32:02.208842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.184 [2024-12-16 12:32:02.208904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.184 [2024-12-16 12:32:02.208912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.184 [2024-12-16 12:32:02.208920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:55.184 [2024-12-16 12:32:02.208925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.184 [2024-12-16 12:32:02.209339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.184 [2024-12-16 12:32:02.209359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.184 [2024-12-16 12:32:02.209367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:19:55.184 [2024-12-16 12:32:02.209379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.184 [2024-12-16 
12:32:02.209505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.184 [2024-12-16 12:32:02.209513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.184 [2024-12-16 12:32:02.209519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:55.184 [2024-12-16 12:32:02.209525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.184 [2024-12-16 12:32:02.221836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.184 [2024-12-16 12:32:02.221862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.184 [2024-12-16 12:32:02.221870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.295 ms 00:19:55.184 [2024-12-16 12:32:02.221877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.184 [2024-12-16 12:32:02.232580] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:55.184 [2024-12-16 12:32:02.232608] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:55.185 [2024-12-16 12:32:02.232618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.185 [2024-12-16 12:32:02.232625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:55.185 [2024-12-16 12:32:02.232633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.643 ms 00:19:55.185 [2024-12-16 12:32:02.232639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.185 [2024-12-16 12:32:02.251524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.185 [2024-12-16 12:32:02.251643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:55.185 [2024-12-16 12:32:02.251656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.824 ms 00:19:55.185 [2024-12-16 12:32:02.251663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.185 [2024-12-16 12:32:02.260826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.185 [2024-12-16 12:32:02.260851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:55.185 [2024-12-16 12:32:02.260859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.109 ms 00:19:55.185 [2024-12-16 12:32:02.260864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.185 [2024-12-16 12:32:02.269866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.185 [2024-12-16 12:32:02.269889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:55.185 [2024-12-16 12:32:02.269897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.960 ms 00:19:55.185 [2024-12-16 12:32:02.269903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.185 [2024-12-16 12:32:02.270412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.185 [2024-12-16 12:32:02.270424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:55.185 [2024-12-16 12:32:02.270432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:19:55.185 [2024-12-16 12:32:02.270439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.446 [2024-12-16 12:32:02.318351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:55.446 [2024-12-16 12:32:02.318389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:55.446 [2024-12-16 12:32:02.318402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.891 ms 00:19:55.446 [2024-12-16 12:32:02.318409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.446 [2024-12-16 12:32:02.326764] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:55.446 [2024-12-16 12:32:02.341584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.446 [2024-12-16 12:32:02.341725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:55.446 [2024-12-16 12:32:02.341741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.104 ms 00:19:55.446 [2024-12-16 12:32:02.341752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.446 [2024-12-16 12:32:02.341847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.446 [2024-12-16 12:32:02.341856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:55.446 [2024-12-16 12:32:02.341864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:55.446 [2024-12-16 12:32:02.341870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.446 [2024-12-16 12:32:02.341917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.446 [2024-12-16 12:32:02.341925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:55.446 [2024-12-16 12:32:02.341932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:55.446 [2024-12-16 12:32:02.341941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.446 [2024-12-16 12:32:02.341965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.446 [2024-12-16 12:32:02.341972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:55.446 [2024-12-16 12:32:02.341978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:55.446 [2024-12-16 12:32:02.341984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.446 [2024-12-16 12:32:02.342014] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:55.446 [2024-12-16 12:32:02.342022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.446 [2024-12-16 12:32:02.342029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:55.446 [2024-12-16 12:32:02.342035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:55.446 [2024-12-16 12:32:02.342041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.446 [2024-12-16 12:32:02.361031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.446 [2024-12-16 12:32:02.361136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:55.446 [2024-12-16 12:32:02.361150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.975 ms 00:19:55.446 [2024-12-16 12:32:02.361168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.446 [2024-12-16 12:32:02.361240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.446 [2024-12-16 12:32:02.361249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:55.446 [2024-12-16 12:32:02.361257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:55.446 [2024-12-16 12:32:02.361263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.447 [2024-12-16 12:32:02.362045] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.447 [2024-12-16 12:32:02.364422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 240.941 ms, result 0 00:19:55.447 [2024-12-16 12:32:02.365539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.447 [2024-12-16 12:32:02.376366] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.390  [2024-12-16T12:32:04.469Z] Copying: 15/256 [MB] (15 MBps) [2024-12-16T12:32:05.414Z] Copying: 26/256 [MB] (11 MBps) [2024-12-16T12:32:06.803Z] Copying: 38/256 [MB] (11 MBps) [2024-12-16T12:32:07.746Z] Copying: 50/256 [MB] (12 MBps) [2024-12-16T12:32:08.689Z] Copying: 62/256 [MB] (12 MBps) [2024-12-16T12:32:09.632Z] Copying: 73/256 [MB] (10 MBps) [2024-12-16T12:32:10.575Z] Copying: 85/256 [MB] (11 MBps) [2024-12-16T12:32:11.521Z] Copying: 97/256 [MB] (12 MBps) [2024-12-16T12:32:12.465Z] Copying: 108/256 [MB] (10 MBps) [2024-12-16T12:32:13.408Z] Copying: 120816/262144 [kB] (10144 kBps) [2024-12-16T12:32:14.795Z] Copying: 129/256 [MB] (11 MBps) [2024-12-16T12:32:15.739Z] Copying: 140/256 [MB] (10 MBps) [2024-12-16T12:32:16.683Z] Copying: 150/256 [MB] (10 MBps) [2024-12-16T12:32:17.624Z] Copying: 161/256 [MB] (11 MBps) [2024-12-16T12:32:18.567Z] Copying: 172/256 [MB] (10 MBps) [2024-12-16T12:32:19.510Z] Copying: 183/256 [MB] (11 MBps) [2024-12-16T12:32:20.454Z] Copying: 195/256 [MB] (11 MBps) [2024-12-16T12:32:21.399Z] Copying: 206/256 [MB] (11 MBps) [2024-12-16T12:32:22.786Z] Copying: 217/256 [MB] (11 MBps) [2024-12-16T12:32:23.730Z] Copying: 228/256 [MB] (10 MBps) [2024-12-16T12:32:24.671Z] Copying: 238/256 [MB] (10 MBps) [2024-12-16T12:32:24.931Z] Copying: 250/256 [MB] (11 MBps) [2024-12-16T12:32:24.931Z] Copying: 256/256 [MB] (average 11 MBps)[2024-12-16 12:32:24.887686] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.825 [2024-12-16 12:32:24.895388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.825 [2024-12-16 12:32:24.895535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:17.825 [2024-12-16 12:32:24.895558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:17.825 [2024-12-16 12:32:24.895566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.825 [2024-12-16 12:32:24.895588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:17.825 [2024-12-16 12:32:24.897780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.825 [2024-12-16 12:32:24.897804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:17.825 [2024-12-16 12:32:24.897812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.181 ms 00:20:17.825 [2024-12-16 12:32:24.897819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.825 [2024-12-16 12:32:24.898026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.825 
[2024-12-16 12:32:24.898034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:17.825 [2024-12-16 12:32:24.898041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:20:17.825 [2024-12-16 12:32:24.898047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.825 [2024-12-16 12:32:24.900839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.825 [2024-12-16 12:32:24.900855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:17.825 [2024-12-16 12:32:24.900863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.777 ms 00:20:17.825 [2024-12-16 12:32:24.900870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.825 [2024-12-16 12:32:24.906062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.825 [2024-12-16 12:32:24.906173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:17.825 [2024-12-16 12:32:24.906187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.179 ms 00:20:17.825 [2024-12-16 12:32:24.906194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.825 [2024-12-16 12:32:24.925164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.825 [2024-12-16 12:32:24.925271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:17.825 [2024-12-16 12:32:24.925284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.925 ms 00:20:17.825 [2024-12-16 12:32:24.925291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.089 [2024-12-16 12:32:24.937733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.089 [2024-12-16 12:32:24.937759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:18.089 [2024-12-16 12:32:24.937772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.415 ms 00:20:18.089 [2024-12-16 12:32:24.937779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.089 [2024-12-16 12:32:24.937882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.089 [2024-12-16 12:32:24.937891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:18.089 [2024-12-16 12:32:24.937904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:18.089 [2024-12-16 12:32:24.937910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.089 [2024-12-16 12:32:24.956599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.089 [2024-12-16 12:32:24.956704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:18.090 [2024-12-16 12:32:24.956716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.677 ms 00:20:18.090 [2024-12-16 12:32:24.956722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.090 [2024-12-16 12:32:24.974980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.090 [2024-12-16 12:32:24.975004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:18.090 [2024-12-16 12:32:24.975012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.233 ms 00:20:18.090 [2024-12-16 12:32:24.975018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.090 [2024-12-16 12:32:24.992537] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.090 [2024-12-16 12:32:24.992561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:18.090 [2024-12-16 12:32:24.992569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.492 ms 00:20:18.090 [2024-12-16 12:32:24.992574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.090 [2024-12-16 12:32:25.010459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.090 [2024-12-16 12:32:25.010559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:18.090 [2024-12-16 12:32:25.010570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.837 ms 00:20:18.090 [2024-12-16 12:32:25.010576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.090 [2024-12-16 12:32:25.010599] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:18.090 [2024-12-16 12:32:25.010611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 
12:32:25.010717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 
00:20:18.090 [2024-12-16 12:32:25.010863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:18.090 [2024-12-16 12:32:25.010937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.010942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.010948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.010954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.010959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.010965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.010972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.010978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.010984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.010990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.010995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 
wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:18.091 [2024-12-16 12:32:25.011217] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:18.091 [2024-12-16 12:32:25.011223] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f50ddf9-d4f2-4e3f-9574-029fb265c97a 00:20:18.091 [2024-12-16 12:32:25.011230] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:18.091 [2024-12-16 12:32:25.011235] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:18.091 [2024-12-16 12:32:25.011241] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:18.091 [2024-12-16 12:32:25.011247] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:18.091 [2024-12-16 12:32:25.011252] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:18.091 [2024-12-16 12:32:25.011258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:18.091 [2024-12-16 12:32:25.011266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:18.091 [2024-12-16 12:32:25.011271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:18.091 [2024-12-16 12:32:25.011276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:18.091 [2024-12-16 12:32:25.011282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.091 [2024-12-16 12:32:25.011290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:18.091 [2024-12-16 12:32:25.011297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:20:18.091 [2024-12-16 12:32:25.011303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.091 [2024-12-16 12:32:25.021370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.091 [2024-12-16 12:32:25.021478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:18.091 [2024-12-16 12:32:25.021489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.054 ms 00:20:18.091 [2024-12-16 12:32:25.021496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.091 [2024-12-16 12:32:25.021803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.091 [2024-12-16 12:32:25.021811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:18.091 [2024-12-16 12:32:25.021818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 
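Note: the ftl_debug.c statistics dump above can be re-derived from the raw counters it prints. A minimal Python sketch (the log file name and helper are hypothetical, not part of the SPDK tree; WAF here is taken as total writes over user writes, which the log reports as inf because user writes is 0):

    import re

    # Pull the write counters out of the ftl_debug.c stats dump and recompute
    # the write amplification factor the log reports as "WAF: inf".
    def waf_from_log(text):
        total = int(re.search(r"total writes:\s*(\d+)", text).group(1))
        user = int(re.search(r"user writes:\s*(\d+)", text).group(1))
        return float("inf") if user == 0 else total / user

    with open("console.log") as f:     # log file name is an assumption
        print(waf_from_log(f.read()))  # here: 960 total / 0 user -> inf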
00:20:18.091 [2024-12-16 12:32:25.021823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.091 [2024-12-16 12:32:25.051180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.091 [2024-12-16 12:32:25.051207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.091 [2024-12-16 12:32:25.051215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.091 [2024-12-16 12:32:25.051226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.091 [2024-12-16 12:32:25.051301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.092 [2024-12-16 12:32:25.051308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.092 [2024-12-16 12:32:25.051315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.092 [2024-12-16 12:32:25.051321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.092 [2024-12-16 12:32:25.051360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.092 [2024-12-16 12:32:25.051368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.092 [2024-12-16 12:32:25.051374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.092 [2024-12-16 12:32:25.051380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.092 [2024-12-16 12:32:25.051396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.092 [2024-12-16 12:32:25.051404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.092 [2024-12-16 12:32:25.051410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.092 [2024-12-16 12:32:25.051415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.092 [2024-12-16 12:32:25.114562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.092 [2024-12-16 12:32:25.114599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.092 [2024-12-16 12:32:25.114608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.092 [2024-12-16 12:32:25.114615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.092 [2024-12-16 12:32:25.166632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.092 [2024-12-16 12:32:25.166668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.092 [2024-12-16 12:32:25.166678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.092 [2024-12-16 12:32:25.166684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.092 [2024-12-16 12:32:25.166743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.092 [2024-12-16 12:32:25.166751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.092 [2024-12-16 12:32:25.166758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.092 [2024-12-16 12:32:25.166764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.092 [2024-12-16 12:32:25.166788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.092 [2024-12-16 12:32:25.166800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.092 [2024-12-16 12:32:25.166806] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.092 [2024-12-16 12:32:25.166813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.092 [2024-12-16 12:32:25.166890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.092 [2024-12-16 12:32:25.166898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:18.092 [2024-12-16 12:32:25.166905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.092 [2024-12-16 12:32:25.166911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.092 [2024-12-16 12:32:25.166939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.092 [2024-12-16 12:32:25.166947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:18.092 [2024-12-16 12:32:25.166956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.092 [2024-12-16 12:32:25.166963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.092 [2024-12-16 12:32:25.167000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.092 [2024-12-16 12:32:25.167007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:18.092 [2024-12-16 12:32:25.167013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.092 [2024-12-16 12:32:25.167020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.092 [2024-12-16 12:32:25.167061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.092 [2024-12-16 12:32:25.167077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:18.092 [2024-12-16 12:32:25.167083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.092 [2024-12-16 12:32:25.167090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.092 [2024-12-16 12:32:25.167232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.829 ms, result 0
00:20:18.663 
00:20:18.663 
00:20:18.663 12:32:25 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:20:18.663 12:32:25 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:20:19.236 12:32:26 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:19.497 [2024-12-16 12:32:26.392779] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
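Note: each management process above logs its steps through mngt/ftl_mngt.c trace_step as a name line (file line 428) followed by a duration line (file line 430). A minimal Python sketch for recovering a per-step timing table from a captured log; the file name and the in-order pairing heuristic are assumptions, not part of the SPDK tree:

    import re

    # "name: <step>" (428) and "duration: <ms>" (430) are printed in pairs,
    # so zipping the two match lists in order reconstructs the timing table.
    NAME = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+?)(?= \d{2}:\d{2}:\d{2}\.\d{3}|$)", re.M)
    DUR = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

    def step_timings(text):
        return [(n, float(d)) for n, d in zip(NAME.findall(text), DUR.findall(text))]

    with open("console.log") as f:     # log file name is an assumption
        for name, ms in sorted(step_timings(f.read()), key=lambda p: p[1], reverse=True)[:5]:
            print(f"{ms:8.3f} ms  {name}")
    # In the shutdown above, the slowest steps are the metadata persists
    # (roughly 17-19 ms each), consistent with the 271.829 ms total that
    # finish_msg reports for 'FTL shutdown'.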
00:20:19.497 [2024-12-16 12:32:26.393169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78721 ] 00:20:19.497 [2024-12-16 12:32:26.552206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.758 [2024-12-16 12:32:26.641305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.019 [2024-12-16 12:32:26.874917] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.019 [2024-12-16 12:32:26.874980] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.019 [2024-12-16 12:32:27.031586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.019 [2024-12-16 12:32:27.031624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:20.019 [2024-12-16 12:32:27.031637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:20.019 [2024-12-16 12:32:27.031643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.034028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.019 [2024-12-16 12:32:27.034135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.019 [2024-12-16 12:32:27.034196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.373 ms 00:20:20.019 [2024-12-16 12:32:27.034216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.034542] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:20.019 [2024-12-16 12:32:27.035195] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:20.019 [2024-12-16 12:32:27.035304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.019 [2024-12-16 12:32:27.035351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.019 [2024-12-16 12:32:27.035370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:20:20.019 [2024-12-16 12:32:27.035385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.036813] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:20.019 [2024-12-16 12:32:27.047047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.019 [2024-12-16 12:32:27.047143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:20.019 [2024-12-16 12:32:27.047167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.235 ms 00:20:20.019 [2024-12-16 12:32:27.047175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.047252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.019 [2024-12-16 12:32:27.047262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:20.019 [2024-12-16 12:32:27.047269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:20.019 [2024-12-16 12:32:27.047275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.053532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:20.019 [2024-12-16 12:32:27.053628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.019 [2024-12-16 12:32:27.053640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.224 ms 00:20:20.019 [2024-12-16 12:32:27.053647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.053721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.019 [2024-12-16 12:32:27.053729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.019 [2024-12-16 12:32:27.053736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:20.019 [2024-12-16 12:32:27.053742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.053762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.019 [2024-12-16 12:32:27.053769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:20.019 [2024-12-16 12:32:27.053775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:20.019 [2024-12-16 12:32:27.053781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.053801] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:20.019 [2024-12-16 12:32:27.056921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.019 [2024-12-16 12:32:27.057016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.019 [2024-12-16 12:32:27.057027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.125 ms 00:20:20.019 [2024-12-16 12:32:27.057033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.057074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.019 [2024-12-16 12:32:27.057081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:20.019 [2024-12-16 12:32:27.057087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:20.019 [2024-12-16 12:32:27.057093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.057110] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:20.019 [2024-12-16 12:32:27.057127] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:20.019 [2024-12-16 12:32:27.057169] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:20.019 [2024-12-16 12:32:27.057183] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:20.019 [2024-12-16 12:32:27.057273] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:20.019 [2024-12-16 12:32:27.057282] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:20.019 [2024-12-16 12:32:27.057296] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:20.019 [2024-12-16 12:32:27.057306] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:20.019 [2024-12-16 12:32:27.057313] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:20.019 [2024-12-16 12:32:27.057319] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:20.019 [2024-12-16 12:32:27.057325] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:20.019 [2024-12-16 12:32:27.057331] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:20.019 [2024-12-16 12:32:27.057337] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:20.019 [2024-12-16 12:32:27.057343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.019 [2024-12-16 12:32:27.057349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:20.019 [2024-12-16 12:32:27.057355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:20:20.019 [2024-12-16 12:32:27.057361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.057446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.019 [2024-12-16 12:32:27.057457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:20.019 [2024-12-16 12:32:27.057464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:20.019 [2024-12-16 12:32:27.057470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.019 [2024-12-16 12:32:27.057548] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:20.019 [2024-12-16 12:32:27.057557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:20.019 [2024-12-16 12:32:27.057564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.019 [2024-12-16 12:32:27.057571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.019 [2024-12-16 12:32:27.057577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:20.019 [2024-12-16 12:32:27.057582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:20.019 [2024-12-16 12:32:27.057588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:20.019 [2024-12-16 12:32:27.057594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:20.019 [2024-12-16 12:32:27.057599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:20.019 [2024-12-16 12:32:27.057606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.019 [2024-12-16 12:32:27.057611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:20.019 [2024-12-16 12:32:27.057625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:20.019 [2024-12-16 12:32:27.057630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.019 [2024-12-16 12:32:27.057636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:20.019 [2024-12-16 12:32:27.057642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:20.019 [2024-12-16 12:32:27.057647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.019 [2024-12-16 12:32:27.057652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:20.019 [2024-12-16 12:32:27.057657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:20.019 [2024-12-16 12:32:27.057662] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.019 [2024-12-16 12:32:27.057668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:20.019 [2024-12-16 12:32:27.057673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:20.019 [2024-12-16 12:32:27.057680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.019 [2024-12-16 12:32:27.057685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:20.019 [2024-12-16 12:32:27.057690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:20.019 [2024-12-16 12:32:27.057695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.019 [2024-12-16 12:32:27.057701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:20.019 [2024-12-16 12:32:27.057707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:20.019 [2024-12-16 12:32:27.057712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.019 [2024-12-16 12:32:27.057717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:20.019 [2024-12-16 12:32:27.057722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:20.020 [2024-12-16 12:32:27.057728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.020 [2024-12-16 12:32:27.057733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:20.020 [2024-12-16 12:32:27.057738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:20.020 [2024-12-16 12:32:27.057744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.020 [2024-12-16 12:32:27.057749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:20.020 [2024-12-16 12:32:27.057754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:20.020 [2024-12-16 12:32:27.057759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.020 [2024-12-16 12:32:27.057765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:20.020 [2024-12-16 12:32:27.057770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:20.020 [2024-12-16 12:32:27.057775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.020 [2024-12-16 12:32:27.057787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:20.020 [2024-12-16 12:32:27.057793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:20.020 [2024-12-16 12:32:27.057799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.020 [2024-12-16 12:32:27.057804] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:20.020 [2024-12-16 12:32:27.057810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:20.020 [2024-12-16 12:32:27.057818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.020 [2024-12-16 12:32:27.057824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.020 [2024-12-16 12:32:27.057830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:20.020 [2024-12-16 12:32:27.057835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:20.020 [2024-12-16 12:32:27.057840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:20.020 
[2024-12-16 12:32:27.057847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:20.020 [2024-12-16 12:32:27.057853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:20.020 [2024-12-16 12:32:27.057858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:20.020 [2024-12-16 12:32:27.057865] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:20.020 [2024-12-16 12:32:27.057872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.020 [2024-12-16 12:32:27.057879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:20.020 [2024-12-16 12:32:27.057885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:20.020 [2024-12-16 12:32:27.057892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:20.020 [2024-12-16 12:32:27.057897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:20.020 [2024-12-16 12:32:27.057904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:20.020 [2024-12-16 12:32:27.057909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:20.020 [2024-12-16 12:32:27.057915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:20.020 [2024-12-16 12:32:27.057920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:20.020 [2024-12-16 12:32:27.057925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:20.020 [2024-12-16 12:32:27.057931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:20.020 [2024-12-16 12:32:27.057936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:20.020 [2024-12-16 12:32:27.057942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:20.020 [2024-12-16 12:32:27.057947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:20.020 [2024-12-16 12:32:27.057953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:20.020 [2024-12-16 12:32:27.057959] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:20.020 [2024-12-16 12:32:27.057966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.020 [2024-12-16 12:32:27.057971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:20.020 [2024-12-16 12:32:27.057977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:20.020 [2024-12-16 12:32:27.057982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:20.020 [2024-12-16 12:32:27.057988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:20.020 [2024-12-16 12:32:27.057994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.020 [2024-12-16 12:32:27.058004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:20.020 [2024-12-16 12:32:27.058011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms 00:20:20.020 [2024-12-16 12:32:27.058016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.020 [2024-12-16 12:32:27.082370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.020 [2024-12-16 12:32:27.082397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.020 [2024-12-16 12:32:27.082406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.299 ms 00:20:20.020 [2024-12-16 12:32:27.082413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.020 [2024-12-16 12:32:27.082508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.020 [2024-12-16 12:32:27.082516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:20.020 [2024-12-16 12:32:27.082523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:20.020 [2024-12-16 12:32:27.082529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.281 [2024-12-16 12:32:27.124037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.281 [2024-12-16 12:32:27.124068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.281 [2024-12-16 12:32:27.124080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.490 ms 00:20:20.281 [2024-12-16 12:32:27.124087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.281 [2024-12-16 12:32:27.124180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.281 [2024-12-16 12:32:27.124190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.281 [2024-12-16 12:32:27.124197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:20.281 [2024-12-16 12:32:27.124203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.281 [2024-12-16 12:32:27.124593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.281 [2024-12-16 12:32:27.124606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.281 [2024-12-16 12:32:27.124614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:20:20.281 [2024-12-16 12:32:27.124623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.281 [2024-12-16 12:32:27.124736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.281 [2024-12-16 12:32:27.124752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.281 [2024-12-16 12:32:27.124759] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:20.281 [2024-12-16 12:32:27.124765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.281 [2024-12-16 12:32:27.137099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.137127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.282 [2024-12-16 12:32:27.137135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.316 ms 00:20:20.282 [2024-12-16 12:32:27.137141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.147883] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:20.282 [2024-12-16 12:32:27.147911] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:20.282 [2024-12-16 12:32:27.147921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.147928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:20.282 [2024-12-16 12:32:27.147935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.681 ms 00:20:20.282 [2024-12-16 12:32:27.147942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.166842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.166869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:20.282 [2024-12-16 12:32:27.166879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.844 ms 00:20:20.282 [2024-12-16 12:32:27.166886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.176116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.176141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:20.282 [2024-12-16 12:32:27.176148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.173 ms 00:20:20.282 [2024-12-16 12:32:27.176166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.185229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.185254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:20.282 [2024-12-16 12:32:27.185261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.020 ms 00:20:20.282 [2024-12-16 12:32:27.185267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.185745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.185762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:20.282 [2024-12-16 12:32:27.185769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:20:20.282 [2024-12-16 12:32:27.185775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.234647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.234802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:20.282 [2024-12-16 12:32:27.234818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.853 ms 00:20:20.282 [2024-12-16 12:32:27.234826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.243300] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:20.282 [2024-12-16 12:32:27.257867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.257988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:20.282 [2024-12-16 12:32:27.258002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.970 ms 00:20:20.282 [2024-12-16 12:32:27.258014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.258090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.258100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:20.282 [2024-12-16 12:32:27.258106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:20.282 [2024-12-16 12:32:27.258113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.258175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.258184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:20.282 [2024-12-16 12:32:27.258191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:20.282 [2024-12-16 12:32:27.258200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.258228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.258235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:20.282 [2024-12-16 12:32:27.258242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:20.282 [2024-12-16 12:32:27.258248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.258277] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:20.282 [2024-12-16 12:32:27.258285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.258291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:20.282 [2024-12-16 12:32:27.258297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:20.282 [2024-12-16 12:32:27.258304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.277412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.277443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:20.282 [2024-12-16 12:32:27.277452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.090 ms 00:20:20.282 [2024-12-16 12:32:27.277459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.282 [2024-12-16 12:32:27.277531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.282 [2024-12-16 12:32:27.277540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:20.282 [2024-12-16 12:32:27.277547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:20.282 [2024-12-16 12:32:27.277554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
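Note: the superblock layout dumped during this startup can be cross-checked arithmetically. A minimal Python sketch over the "SB metadata layout - nvc" table above (offsets and sizes transcribed from the log; the 4 KiB FTL block size is an assumption): the regions must tile the NV cache device with no gaps, and the l2p region size must equal the L2P entry count times the 4-byte address size.

    # (blk_offs, blk_sz) pairs in blocks, in the order printed in the dump.
    regions = [
        (0x0, 0x20), (0x20, 0x5a00), (0x5a20, 0x80), (0x5aa0, 0x80),
        (0x5b20, 0x800), (0x6320, 0x800), (0x6b20, 0x800), (0x7320, 0x800),
        (0x7b20, 0x40), (0x7b60, 0x40), (0x7ba0, 0x20), (0x7bc0, 0x20),
        (0x7be0, 0x20), (0x7c00, 0x20), (0x7c20, 0x13b6e0),
    ]
    end = 0
    for offs, size in regions:
        assert offs == end, hex(offs)  # regions sit back to back, no gaps or overlaps
        end = offs + size
    print(end * 4096 // 2**20)   # 1323776 blocks -> 5171 MiB, matching
                                 # "NV cache device capacity: 5171.00 MiB"
    print(23592960 * 4 / 2**20)  # L2P entries * 4 B address size -> 90.0 MiB,
                                 # matching "Region l2p ... blocks: 90.00 MiB"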
00:20:20.282 [2024-12-16 12:32:27.278383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:20.282 [2024-12-16 12:32:27.280794] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 246.529 ms, result 0
00:20:20.282 [2024-12-16 12:32:27.282068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:20.282 [2024-12-16 12:32:27.293111] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:20.856  [2024-12-16T12:32:27.962Z] Copying: 4096/4096 [kB] (average 11 MBps)
[2024-12-16 12:32:27.653734] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:20.856 [2024-12-16 12:32:27.660526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:20.856 [2024-12-16 12:32:27.660556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:20.856 [2024-12-16 12:32:27.660569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:20.856 [2024-12-16 12:32:27.660576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:20.856 [2024-12-16 12:32:27.660593] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:20.856 [2024-12-16 12:32:27.662828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:20.856 [2024-12-16 12:32:27.662852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:20.856 [2024-12-16 12:32:27.662860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms
00:20:20.856 [2024-12-16 12:32:27.662867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:20.856 [2024-12-16 12:32:27.665146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:20.856 [2024-12-16 12:32:27.665181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:20.856 [2024-12-16 12:32:27.665189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.252 ms
00:20:20.856 [2024-12-16 12:32:27.665195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:20.856 [2024-12-16 12:32:27.668646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:20.856 [2024-12-16 12:32:27.668669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:20.856 [2024-12-16 12:32:27.668678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.436 ms
00:20:20.856 [2024-12-16 12:32:27.668685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:20.856 [2024-12-16 12:32:27.673851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:20.856 [2024-12-16 12:32:27.673873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:20.856 [2024-12-16 12:32:27.673880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.145 ms
00:20:20.856 [2024-12-16 12:32:27.673886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:20.856 [2024-12-16 12:32:27.692303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:20.856 [2024-12-16 12:32:27.692328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:20.856 [2024-12-16 12:32:27.692336] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 18.377 ms 00:20:20.856 [2024-12-16 12:32:27.692342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.856 [2024-12-16 12:32:27.704521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.856 [2024-12-16 12:32:27.704549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:20.856 [2024-12-16 12:32:27.704559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.151 ms 00:20:20.856 [2024-12-16 12:32:27.704566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.856 [2024-12-16 12:32:27.704665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.856 [2024-12-16 12:32:27.704673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:20.856 [2024-12-16 12:32:27.704686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:20.856 [2024-12-16 12:32:27.704692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.856 [2024-12-16 12:32:27.723657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.856 [2024-12-16 12:32:27.723681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:20.856 [2024-12-16 12:32:27.723689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.952 ms 00:20:20.856 [2024-12-16 12:32:27.723695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.856 [2024-12-16 12:32:27.742202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.856 [2024-12-16 12:32:27.742228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:20.856 [2024-12-16 12:32:27.742236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.480 ms 00:20:20.856 [2024-12-16 12:32:27.742241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.856 [2024-12-16 12:32:27.759927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.856 [2024-12-16 12:32:27.760075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:20.856 [2024-12-16 12:32:27.760088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.657 ms 00:20:20.856 [2024-12-16 12:32:27.760093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.856 [2024-12-16 12:32:27.778037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.856 [2024-12-16 12:32:27.778062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:20.856 [2024-12-16 12:32:27.778070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.898 ms 00:20:20.857 [2024-12-16 12:32:27.778075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.857 [2024-12-16 12:32:27.778102] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:20.857 [2024-12-16 12:32:27.778114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:20.857 [2024-12-16 12:32:27.778140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778623] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:20.857 [2024-12-16 12:32:27.778669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:20.858 [2024-12-16 12:32:27.778762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:20.858 [2024-12-16 12:32:27.778768] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f50ddf9-d4f2-4e3f-9574-029fb265c97a 00:20:20.858 [2024-12-16 12:32:27.778774] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:20.858 [2024-12-16 12:32:27.778780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:20.858 [2024-12-16 12:32:27.778786] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:20.858 [2024-12-16 12:32:27.778793] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:20.858 [2024-12-16 12:32:27.778799] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:20.858 [2024-12-16 12:32:27.778805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:20.858 [2024-12-16 12:32:27.778812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:20.858 [2024-12-16 12:32:27.778817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:20.858 [2024-12-16 12:32:27.778821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:20.858 [2024-12-16 12:32:27.778827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.858 [2024-12-16 12:32:27.778833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:20.858 [2024-12-16 12:32:27.778839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:20:20.858 [2024-12-16 12:32:27.778844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.788975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.858 [2024-12-16 12:32:27.788998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:20.858 [2024-12-16 12:32:27.789006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.116 ms 00:20:20.858 [2024-12-16 12:32:27.789013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.789342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.858 [2024-12-16 12:32:27.789351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:20.858 [2024-12-16 12:32:27.789358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:20:20.858 [2024-12-16 12:32:27.789363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.818681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.818707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.858 [2024-12-16 12:32:27.818715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.818725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.818777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.818784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.858 [2024-12-16 12:32:27.818795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.818801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.818834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.818841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.858 [2024-12-16 12:32:27.818847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.818853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.818869] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.818875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.858 [2024-12-16 12:32:27.818881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.818887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.882816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.882851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.858 [2024-12-16 12:32:27.882861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.882868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.934307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.934342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.858 [2024-12-16 12:32:27.934353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.934360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.934406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.934414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.858 [2024-12-16 12:32:27.934421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.934428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.934453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.934464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.858 [2024-12-16 12:32:27.934471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.934477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.934554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.934563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.858 [2024-12-16 12:32:27.934571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.934577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.934604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.934611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:20.858 [2024-12-16 12:32:27.934619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.934625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.934660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.934667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.858 [2024-12-16 12:32:27.934673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.934680] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.934721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.858 [2024-12-16 12:32:27.934731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.858 [2024-12-16 12:32:27.934737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.858 [2024-12-16 12:32:27.934743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.858 [2024-12-16 12:32:27.934872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 274.328 ms, result 0 00:20:21.431 00:20:21.431 00:20:21.431 12:32:28 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78745 00:20:21.431 12:32:28 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78745 00:20:21.431 12:32:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78745 ']' 00:20:21.431 12:32:28 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:21.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.431 12:32:28 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.431 12:32:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.431 12:32:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.431 12:32:28 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.431 12:32:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:21.692 [2024-12-16 12:32:28.591623] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
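
The xtrace lines above record the ftl_trim setup pattern: trim.sh launches spdk_tgt with FTL init logging, saves its pid in svcpid, and waitforlisten polls the RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100) until the target answers, after which the script drives it over rpc.py (the load_config and bdev_ftl_unmap calls that follow later in this log). A minimal sketch of that start-and-wait pattern, assuming an SPDK checkout at $SPDK_DIR; the loop below paraphrases what waitforlisten does and is not the actual autotest_common.sh implementation:

    #!/usr/bin/env bash
    # Sketch only: start spdk_tgt, then poll its RPC socket before issuing RPCs.
    "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!                          # mirrors svcpid=... traced at trim.sh@93
    for ((i = 0; i < 100; i++)); do    # max_retries=100, as traced above
        # rpc_get_methods is a lightweight RPC; success means the socket is up
        if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done
    # The harness then issues test RPCs against the live target, e.g.:
    # "$SPDK_DIR/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
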
00:20:21.692 [2024-12-16 12:32:28.591723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78745 ] 00:20:21.692 [2024-12-16 12:32:28.743715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.952 [2024-12-16 12:32:28.834782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.524 12:32:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.524 12:32:29 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:22.524 12:32:29 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:22.524 [2024-12-16 12:32:29.627564] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.524 [2024-12-16 12:32:29.627758] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.787 [2024-12-16 12:32:29.800405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.800442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:22.787 [2024-12-16 12:32:29.800455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:22.787 [2024-12-16 12:32:29.800461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.803177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.803205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:22.787 [2024-12-16 12:32:29.803215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:20:22.787 [2024-12-16 12:32:29.803221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.803289] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:22.787 [2024-12-16 12:32:29.803811] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:22.787 [2024-12-16 12:32:29.803829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.803837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:22.787 [2024-12-16 12:32:29.803846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:20:22.787 [2024-12-16 12:32:29.803852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.805335] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:22.787 [2024-12-16 12:32:29.815879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.815910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:22.787 [2024-12-16 12:32:29.815919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.547 ms 00:20:22.787 [2024-12-16 12:32:29.815932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.816022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.816037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:22.787 [2024-12-16 12:32:29.816044] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:22.787 [2024-12-16 12:32:29.816056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.822450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.822597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:22.787 [2024-12-16 12:32:29.822609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.345 ms 00:20:22.787 [2024-12-16 12:32:29.822617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.822696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.822705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:22.787 [2024-12-16 12:32:29.822712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:22.787 [2024-12-16 12:32:29.822723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.822743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.822751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:22.787 [2024-12-16 12:32:29.822757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:22.787 [2024-12-16 12:32:29.822764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.822783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:22.787 [2024-12-16 12:32:29.825702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.825802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:22.787 [2024-12-16 12:32:29.825818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.922 ms 00:20:22.787 [2024-12-16 12:32:29.825824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.825859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.825866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:22.787 [2024-12-16 12:32:29.825874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:22.787 [2024-12-16 12:32:29.825881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.825899] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:22.787 [2024-12-16 12:32:29.825917] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:22.787 [2024-12-16 12:32:29.825953] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:22.787 [2024-12-16 12:32:29.825965] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:22.787 [2024-12-16 12:32:29.826051] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:22.787 [2024-12-16 12:32:29.826060] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:22.787 [2024-12-16 12:32:29.826073] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:22.787 [2024-12-16 12:32:29.826081] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:22.787 [2024-12-16 12:32:29.826089] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:22.787 [2024-12-16 12:32:29.826096] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:22.787 [2024-12-16 12:32:29.826103] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:22.787 [2024-12-16 12:32:29.826108] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:22.787 [2024-12-16 12:32:29.826117] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:22.787 [2024-12-16 12:32:29.826124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.826131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:22.787 [2024-12-16 12:32:29.826137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:20:22.787 [2024-12-16 12:32:29.826144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.826236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.787 [2024-12-16 12:32:29.826247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:22.787 [2024-12-16 12:32:29.826252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:22.787 [2024-12-16 12:32:29.826260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.787 [2024-12-16 12:32:29.826339] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:22.787 [2024-12-16 12:32:29.826349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:22.787 [2024-12-16 12:32:29.826355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.787 [2024-12-16 12:32:29.826363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.787 [2024-12-16 12:32:29.826370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:22.787 [2024-12-16 12:32:29.826379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:22.787 [2024-12-16 12:32:29.826384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:22.787 [2024-12-16 12:32:29.826392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:22.787 [2024-12-16 12:32:29.826399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:22.787 [2024-12-16 12:32:29.826408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.787 [2024-12-16 12:32:29.826415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:22.787 [2024-12-16 12:32:29.826422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:22.787 [2024-12-16 12:32:29.826427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.787 [2024-12-16 12:32:29.826433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:22.787 [2024-12-16 12:32:29.826439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:22.787 [2024-12-16 12:32:29.826445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.787 
[2024-12-16 12:32:29.826450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:22.787 [2024-12-16 12:32:29.826457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:22.787 [2024-12-16 12:32:29.826467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.787 [2024-12-16 12:32:29.826473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:22.787 [2024-12-16 12:32:29.826479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:22.787 [2024-12-16 12:32:29.826486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.787 [2024-12-16 12:32:29.826491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:22.787 [2024-12-16 12:32:29.826499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:22.787 [2024-12-16 12:32:29.826504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.787 [2024-12-16 12:32:29.826510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:22.787 [2024-12-16 12:32:29.826515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:22.787 [2024-12-16 12:32:29.826522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.787 [2024-12-16 12:32:29.826527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:22.787 [2024-12-16 12:32:29.826534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:22.787 [2024-12-16 12:32:29.826539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.787 [2024-12-16 12:32:29.826546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:22.787 [2024-12-16 12:32:29.826552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:22.787 [2024-12-16 12:32:29.826559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.787 [2024-12-16 12:32:29.826564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:22.787 [2024-12-16 12:32:29.826571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:22.787 [2024-12-16 12:32:29.826575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.787 [2024-12-16 12:32:29.826582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:22.787 [2024-12-16 12:32:29.826587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:22.787 [2024-12-16 12:32:29.826594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.787 [2024-12-16 12:32:29.826600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:22.788 [2024-12-16 12:32:29.826607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:22.788 [2024-12-16 12:32:29.826612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.788 [2024-12-16 12:32:29.826619] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:22.788 [2024-12-16 12:32:29.826626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:22.788 [2024-12-16 12:32:29.826633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.788 [2024-12-16 12:32:29.826639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.788 [2024-12-16 12:32:29.826646] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:22.788 [2024-12-16 12:32:29.826651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:22.788 [2024-12-16 12:32:29.826657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:22.788 [2024-12-16 12:32:29.826663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:22.788 [2024-12-16 12:32:29.826669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:22.788 [2024-12-16 12:32:29.826674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:22.788 [2024-12-16 12:32:29.826682] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:22.788 [2024-12-16 12:32:29.826690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.788 [2024-12-16 12:32:29.826702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:22.788 [2024-12-16 12:32:29.826707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:22.788 [2024-12-16 12:32:29.826715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:22.788 [2024-12-16 12:32:29.826722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:22.788 [2024-12-16 12:32:29.826729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:22.788 [2024-12-16 12:32:29.826735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:22.788 [2024-12-16 12:32:29.826742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:22.788 [2024-12-16 12:32:29.826748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:22.788 [2024-12-16 12:32:29.826756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:22.788 [2024-12-16 12:32:29.826761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:22.788 [2024-12-16 12:32:29.826768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:22.788 [2024-12-16 12:32:29.826774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:22.788 [2024-12-16 12:32:29.826781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:22.788 [2024-12-16 12:32:29.826787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:22.788 [2024-12-16 12:32:29.826795] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:22.788 [2024-12-16 
12:32:29.826802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.788 [2024-12-16 12:32:29.826811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:22.788 [2024-12-16 12:32:29.826818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:22.788 [2024-12-16 12:32:29.826827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:22.788 [2024-12-16 12:32:29.826833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:22.788 [2024-12-16 12:32:29.826840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.788 [2024-12-16 12:32:29.826846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:22.788 [2024-12-16 12:32:29.826853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:20:22.788 [2024-12-16 12:32:29.826865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.788 [2024-12-16 12:32:29.851145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.788 [2024-12-16 12:32:29.851182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:22.788 [2024-12-16 12:32:29.851192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.223 ms 00:20:22.788 [2024-12-16 12:32:29.851201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.788 [2024-12-16 12:32:29.851295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.788 [2024-12-16 12:32:29.851303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:22.788 [2024-12-16 12:32:29.851311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:22.788 [2024-12-16 12:32:29.851317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.788 [2024-12-16 12:32:29.877806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.788 [2024-12-16 12:32:29.877835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:22.788 [2024-12-16 12:32:29.877845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.469 ms 00:20:22.788 [2024-12-16 12:32:29.877851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.788 [2024-12-16 12:32:29.877898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.788 [2024-12-16 12:32:29.877905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:22.788 [2024-12-16 12:32:29.877913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:22.788 [2024-12-16 12:32:29.877919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.788 [2024-12-16 12:32:29.878329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.788 [2024-12-16 12:32:29.878343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:22.788 [2024-12-16 12:32:29.878354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:20:22.788 [2024-12-16 12:32:29.878359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:22.788 [2024-12-16 12:32:29.878472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.788 [2024-12-16 12:32:29.878480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:22.788 [2024-12-16 12:32:29.878488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:22.788 [2024-12-16 12:32:29.878495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.050 [2024-12-16 12:32:29.891909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.050 [2024-12-16 12:32:29.891935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:23.051 [2024-12-16 12:32:29.891945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.393 ms 00:20:23.051 [2024-12-16 12:32:29.891950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:29.913149] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:23.051 [2024-12-16 12:32:29.913219] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:23.051 [2024-12-16 12:32:29.913239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:29.913251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:23.051 [2024-12-16 12:32:29.913265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.189 ms 00:20:23.051 [2024-12-16 12:32:29.913281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:29.933309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:29.933338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:23.051 [2024-12-16 12:32:29.933350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.938 ms 00:20:23.051 [2024-12-16 12:32:29.933357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:29.942690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:29.942715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:23.051 [2024-12-16 12:32:29.942728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.269 ms 00:20:23.051 [2024-12-16 12:32:29.942734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:29.951591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:29.951615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:23.051 [2024-12-16 12:32:29.951625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.812 ms 00:20:23.051 [2024-12-16 12:32:29.951630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:29.952105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:29.952122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:23.051 [2024-12-16 12:32:29.952131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:20:23.051 [2024-12-16 12:32:29.952138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 
12:32:30.000565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:30.000597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:23.051 [2024-12-16 12:32:30.000608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.408 ms 00:20:23.051 [2024-12-16 12:32:30.000614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:30.009286] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:23.051 [2024-12-16 12:32:30.024517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:30.024553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:23.051 [2024-12-16 12:32:30.024565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.830 ms 00:20:23.051 [2024-12-16 12:32:30.024573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:30.024647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:30.024656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:23.051 [2024-12-16 12:32:30.024663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:23.051 [2024-12-16 12:32:30.024671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:30.024717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:30.024727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:23.051 [2024-12-16 12:32:30.024734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:23.051 [2024-12-16 12:32:30.024744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:30.024762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:30.024770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:23.051 [2024-12-16 12:32:30.024777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:23.051 [2024-12-16 12:32:30.024787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:30.024816] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:23.051 [2024-12-16 12:32:30.024827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:30.024836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:23.051 [2024-12-16 12:32:30.024844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:23.051 [2024-12-16 12:32:30.024849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:30.044427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:30.044456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:23.051 [2024-12-16 12:32:30.044467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.556 ms 00:20:23.051 [2024-12-16 12:32:30.044474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:30.044554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.051 [2024-12-16 12:32:30.044563] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:23.051 [2024-12-16 12:32:30.044572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:23.051 [2024-12-16 12:32:30.044580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.051 [2024-12-16 12:32:30.045395] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:23.051 [2024-12-16 12:32:30.047956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 244.701 ms, result 0 00:20:23.051 [2024-12-16 12:32:30.049587] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:23.051 Some configs were skipped because the RPC state that can call them passed over. 00:20:23.051 12:32:30 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:23.312 [2024-12-16 12:32:30.275043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.312 [2024-12-16 12:32:30.275225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:23.312 [2024-12-16 12:32:30.275277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:20:23.312 [2024-12-16 12:32:30.275298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.312 [2024-12-16 12:32:30.275337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.825 ms, result 0 00:20:23.312 true 00:20:23.312 12:32:30 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:23.571 [2024-12-16 12:32:30.475286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.571 [2024-12-16 12:32:30.475387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:23.571 [2024-12-16 12:32:30.475432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:20:23.571 [2024-12-16 12:32:30.475449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.571 [2024-12-16 12:32:30.475488] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.779 ms, result 0 00:20:23.571 true 00:20:23.571 12:32:30 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78745 00:20:23.571 12:32:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78745 ']' 00:20:23.571 12:32:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78745 00:20:23.571 12:32:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:23.571 12:32:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.571 12:32:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78745 00:20:23.571 killing process with pid 78745 00:20:23.571 12:32:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:23.571 12:32:30 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:23.571 12:32:30 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78745' 00:20:23.571 12:32:30 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78745 00:20:23.571 12:32:30 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78745 00:20:24.160 [2024-12-16 12:32:31.094521] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.160 [2024-12-16 12:32:31.094574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:24.160 [2024-12-16 12:32:31.094587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:24.160 [2024-12-16 12:32:31.094595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.160 [2024-12-16 12:32:31.094615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:24.160 [2024-12-16 12:32:31.096844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.160 [2024-12-16 12:32:31.096873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:24.160 [2024-12-16 12:32:31.096885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.213 ms 00:20:24.160 [2024-12-16 12:32:31.096891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.160 [2024-12-16 12:32:31.097128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.160 [2024-12-16 12:32:31.097136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:24.160 [2024-12-16 12:32:31.097145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:20:24.160 [2024-12-16 12:32:31.097151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.160 [2024-12-16 12:32:31.100800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.160 [2024-12-16 12:32:31.100965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:24.160 [2024-12-16 12:32:31.100984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.623 ms 00:20:24.160 [2024-12-16 12:32:31.100991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.160 [2024-12-16 12:32:31.106230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.161 [2024-12-16 12:32:31.106255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:24.161 [2024-12-16 12:32:31.106264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.201 ms 00:20:24.161 [2024-12-16 12:32:31.106270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.161 [2024-12-16 12:32:31.114722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.161 [2024-12-16 12:32:31.114753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:24.161 [2024-12-16 12:32:31.114765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.400 ms 00:20:24.161 [2024-12-16 12:32:31.114771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.161 [2024-12-16 12:32:31.122290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.161 [2024-12-16 12:32:31.122402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:24.161 [2024-12-16 12:32:31.122418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.485 ms 00:20:24.161 [2024-12-16 12:32:31.122424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.161 [2024-12-16 12:32:31.122551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.161 [2024-12-16 12:32:31.122560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:24.161 [2024-12-16 12:32:31.122569] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:24.161 [2024-12-16 12:32:31.122575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.161 [2024-12-16 12:32:31.131346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.161 [2024-12-16 12:32:31.131370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:24.161 [2024-12-16 12:32:31.131379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.752 ms 00:20:24.161 [2024-12-16 12:32:31.131385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.161 [2024-12-16 12:32:31.139960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.161 [2024-12-16 12:32:31.139983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:24.161 [2024-12-16 12:32:31.139994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.543 ms 00:20:24.161 [2024-12-16 12:32:31.139999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.161 [2024-12-16 12:32:31.147626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.161 [2024-12-16 12:32:31.147650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:24.161 [2024-12-16 12:32:31.147658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.594 ms 00:20:24.161 [2024-12-16 12:32:31.147664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.161 [2024-12-16 12:32:31.155180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.161 [2024-12-16 12:32:31.155203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:24.161 [2024-12-16 12:32:31.155212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.464 ms 00:20:24.161 [2024-12-16 12:32:31.155218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.161 [2024-12-16 12:32:31.155262] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:24.161 [2024-12-16 12:32:31.155274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155344] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 
[2024-12-16 12:32:31.155511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:24.161 [2024-12-16 12:32:31.155656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:24.162 [2024-12-16 12:32:31.155674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:24.162 [2024-12-16 12:32:31.155949] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:24.162 [2024-12-16 12:32:31.155960] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f50ddf9-d4f2-4e3f-9574-029fb265c97a 00:20:24.162 [2024-12-16 12:32:31.155969] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:24.162 [2024-12-16 12:32:31.155976] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:24.162 [2024-12-16 12:32:31.155982] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:24.162 [2024-12-16 12:32:31.155989] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:24.162 [2024-12-16 12:32:31.155994] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:24.162 [2024-12-16 12:32:31.156002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:24.162 [2024-12-16 12:32:31.156007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:24.162 [2024-12-16 12:32:31.156014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:24.162 [2024-12-16 12:32:31.156019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:24.162 [2024-12-16 12:32:31.156025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
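The two bdev_ftl_unmap RPCs traced above (trim.sh lines 99 and 100) trim a 1024-block range at each end of the device: LBA 0, and LBA 23591936, which is the 23592960 L2P entries reported in the layout dump below minus the 1024-block unmap length. A minimal sketch of the same two calls against a running target, using the paths from this run (default RPC socket assumed):

    # Trim the first and the last 1024-block range of the ftl0 bdev.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    $RPC bdev_ftl_unmap -b ftl0 --lba $((23592960 - 1024)) --num_blocks 1024

Each call runs as its own 'FTL trim' management process and, as the trace above shows, completes in under 3 ms with result 0.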
00:20:24.162 [2024-12-16 12:32:31.156031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:24.162 [2024-12-16 12:32:31.156039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:20:24.162 [2024-12-16 12:32:31.156044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.162 [2024-12-16 12:32:31.166401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.162 [2024-12-16 12:32:31.166425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:24.162 [2024-12-16 12:32:31.166437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.339 ms 00:20:24.162 [2024-12-16 12:32:31.166444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.162 [2024-12-16 12:32:31.166753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.162 [2024-12-16 12:32:31.166767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:24.162 [2024-12-16 12:32:31.166777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:24.162 [2024-12-16 12:32:31.166783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.162 [2024-12-16 12:32:31.203633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.162 [2024-12-16 12:32:31.203660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:24.162 [2024-12-16 12:32:31.203670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.162 [2024-12-16 12:32:31.203677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.162 [2024-12-16 12:32:31.203755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.162 [2024-12-16 12:32:31.203763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.162 [2024-12-16 12:32:31.203773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.162 [2024-12-16 12:32:31.203779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.162 [2024-12-16 12:32:31.203821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.162 [2024-12-16 12:32:31.203829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.162 [2024-12-16 12:32:31.203839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.162 [2024-12-16 12:32:31.203845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.162 [2024-12-16 12:32:31.203862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.162 [2024-12-16 12:32:31.203868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.162 [2024-12-16 12:32:31.203876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.162 [2024-12-16 12:32:31.203883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.434 [2024-12-16 12:32:31.266331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.434 [2024-12-16 12:32:31.266372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:24.434 [2024-12-16 12:32:31.266383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.434 [2024-12-16 12:32:31.266389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.434 [2024-12-16 
12:32:31.318147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.434 [2024-12-16 12:32:31.318188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:24.434 [2024-12-16 12:32:31.318198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.434 [2024-12-16 12:32:31.318207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.434 [2024-12-16 12:32:31.318287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.434 [2024-12-16 12:32:31.318296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.434 [2024-12-16 12:32:31.318305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.434 [2024-12-16 12:32:31.318312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.434 [2024-12-16 12:32:31.318340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.434 [2024-12-16 12:32:31.318348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.434 [2024-12-16 12:32:31.318356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.434 [2024-12-16 12:32:31.318362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.434 [2024-12-16 12:32:31.318442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.434 [2024-12-16 12:32:31.318451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.434 [2024-12-16 12:32:31.318459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.434 [2024-12-16 12:32:31.318466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.434 [2024-12-16 12:32:31.318495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.434 [2024-12-16 12:32:31.318503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:24.434 [2024-12-16 12:32:31.318511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.435 [2024-12-16 12:32:31.318518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.435 [2024-12-16 12:32:31.318558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.435 [2024-12-16 12:32:31.318566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.435 [2024-12-16 12:32:31.318576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.435 [2024-12-16 12:32:31.318582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.435 [2024-12-16 12:32:31.318624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.435 [2024-12-16 12:32:31.318632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.435 [2024-12-16 12:32:31.318640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.435 [2024-12-16 12:32:31.318646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.435 [2024-12-16 12:32:31.318777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 224.230 ms, result 0 00:20:25.007 12:32:31 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:25.007 [2024-12-16 12:32:31.951585] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:25.007 [2024-12-16 12:32:31.951708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78792 ] 00:20:25.007 [2024-12-16 12:32:32.107975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.268 [2024-12-16 12:32:32.199351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.530 [2024-12-16 12:32:32.432546] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:25.530 [2024-12-16 12:32:32.432608] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:25.530 [2024-12-16 12:32:32.589025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.589212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:25.530 [2024-12-16 12:32:32.589229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:25.530 [2024-12-16 12:32:32.589236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.591712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.591748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.530 [2024-12-16 12:32:32.591758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.457 ms 00:20:25.530 [2024-12-16 12:32:32.591764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.591844] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:25.530 [2024-12-16 12:32:32.592417] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:25.530 [2024-12-16 12:32:32.592433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.592440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.530 [2024-12-16 12:32:32.592447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:20:25.530 [2024-12-16 12:32:32.592454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.593854] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:25.530 [2024-12-16 12:32:32.604333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.604360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:25.530 [2024-12-16 12:32:32.604369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.481 ms 00:20:25.530 [2024-12-16 12:32:32.604376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.604450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.604459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:25.530 [2024-12-16 12:32:32.604466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:25.530 [2024-12-16 
12:32:32.604472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.610875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.610899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.530 [2024-12-16 12:32:32.610907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.372 ms 00:20:25.530 [2024-12-16 12:32:32.610913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.610987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.610994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.530 [2024-12-16 12:32:32.611001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:25.530 [2024-12-16 12:32:32.611007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.611027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.611033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:25.530 [2024-12-16 12:32:32.611040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:25.530 [2024-12-16 12:32:32.611047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.611066] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:25.530 [2024-12-16 12:32:32.614086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.614110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.530 [2024-12-16 12:32:32.614117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.024 ms 00:20:25.530 [2024-12-16 12:32:32.614123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.614166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.614174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:25.530 [2024-12-16 12:32:32.614181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:25.530 [2024-12-16 12:32:32.614187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.614204] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:25.530 [2024-12-16 12:32:32.614222] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:25.530 [2024-12-16 12:32:32.614251] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:25.530 [2024-12-16 12:32:32.614264] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:25.530 [2024-12-16 12:32:32.614359] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:25.530 [2024-12-16 12:32:32.614369] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:25.530 [2024-12-16 12:32:32.614378] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
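Everything from here to the final 'Copying: 256/256' progress line below belongs to the spdk_dd invocation above (trim.sh line 105): it attaches to ftl0 through the JSON config and copies 65536 blocks out to a plain file, which at the FTL's 4 KiB block size is 256 MiB, matching the copy progress reported later. A re-run sketch under the same workspace paths:

    # Dump 65536 blocks (256 MiB at 4 KiB per block) from the ftl0 bdev to a file.
    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_dd --ib=ftl0 --of=$SPDK/test/ftl/data \
        --count=65536 --json=$SPDK/test/ftl/config/ftl.json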
00:20:25.530 [2024-12-16 12:32:32.614388] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:25.530 [2024-12-16 12:32:32.614395] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:25.530 [2024-12-16 12:32:32.614402] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:25.530 [2024-12-16 12:32:32.614408] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:25.530 [2024-12-16 12:32:32.614415] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:25.530 [2024-12-16 12:32:32.614421] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:25.530 [2024-12-16 12:32:32.614427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.614434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:25.530 [2024-12-16 12:32:32.614440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:20:25.530 [2024-12-16 12:32:32.614446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.614523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.530 [2024-12-16 12:32:32.614533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:25.530 [2024-12-16 12:32:32.614539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:25.530 [2024-12-16 12:32:32.614545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.530 [2024-12-16 12:32:32.614622] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:25.530 [2024-12-16 12:32:32.614631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:25.530 [2024-12-16 12:32:32.614637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.530 [2024-12-16 12:32:32.614644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.530 [2024-12-16 12:32:32.614650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:25.530 [2024-12-16 12:32:32.614656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:25.530 [2024-12-16 12:32:32.614661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:25.530 [2024-12-16 12:32:32.614667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:25.530 [2024-12-16 12:32:32.614673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:25.530 [2024-12-16 12:32:32.614678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.530 [2024-12-16 12:32:32.614684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:25.530 [2024-12-16 12:32:32.614694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:25.530 [2024-12-16 12:32:32.614703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.530 [2024-12-16 12:32:32.614708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:25.530 [2024-12-16 12:32:32.614714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:25.530 [2024-12-16 12:32:32.614719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.530 [2024-12-16 12:32:32.614725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:25.530 [2024-12-16 12:32:32.614730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:25.530 [2024-12-16 12:32:32.614735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.530 [2024-12-16 12:32:32.614740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:25.530 [2024-12-16 12:32:32.614745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:25.531 [2024-12-16 12:32:32.614751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.531 [2024-12-16 12:32:32.614756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:25.531 [2024-12-16 12:32:32.614760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:25.531 [2024-12-16 12:32:32.614765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.531 [2024-12-16 12:32:32.614770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:25.531 [2024-12-16 12:32:32.614776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:25.531 [2024-12-16 12:32:32.614781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.531 [2024-12-16 12:32:32.614786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:25.531 [2024-12-16 12:32:32.614791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:25.531 [2024-12-16 12:32:32.614796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.531 [2024-12-16 12:32:32.614802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:25.531 [2024-12-16 12:32:32.614807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:25.531 [2024-12-16 12:32:32.614813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.531 [2024-12-16 12:32:32.614818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:25.531 [2024-12-16 12:32:32.614823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:25.531 [2024-12-16 12:32:32.614829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.531 [2024-12-16 12:32:32.614834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:25.531 [2024-12-16 12:32:32.614840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:25.531 [2024-12-16 12:32:32.614845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.531 [2024-12-16 12:32:32.614850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:25.531 [2024-12-16 12:32:32.614855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:25.531 [2024-12-16 12:32:32.614860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.531 [2024-12-16 12:32:32.614866] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:25.531 [2024-12-16 12:32:32.614875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:25.531 [2024-12-16 12:32:32.614882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.531 [2024-12-16 12:32:32.614888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.531 [2024-12-16 12:32:32.614895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:25.531 [2024-12-16 12:32:32.614900] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:25.531 [2024-12-16 12:32:32.614906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:25.531 [2024-12-16 12:32:32.614912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:25.531 [2024-12-16 12:32:32.614917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:25.531 [2024-12-16 12:32:32.614922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:25.531 [2024-12-16 12:32:32.614928] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:25.531 [2024-12-16 12:32:32.614934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.531 [2024-12-16 12:32:32.614941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:25.531 [2024-12-16 12:32:32.614947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:25.531 [2024-12-16 12:32:32.614952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:25.531 [2024-12-16 12:32:32.614958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:25.531 [2024-12-16 12:32:32.614963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:25.531 [2024-12-16 12:32:32.614968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:25.531 [2024-12-16 12:32:32.614973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:25.531 [2024-12-16 12:32:32.614979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:25.531 [2024-12-16 12:32:32.614985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:25.531 [2024-12-16 12:32:32.614990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:25.531 [2024-12-16 12:32:32.614995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:25.531 [2024-12-16 12:32:32.615001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:25.531 [2024-12-16 12:32:32.615006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:25.531 [2024-12-16 12:32:32.615012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:25.531 [2024-12-16 12:32:32.615018] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:25.531 [2024-12-16 12:32:32.615026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.531 [2024-12-16 12:32:32.615033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:25.531 [2024-12-16 12:32:32.615038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:25.531 [2024-12-16 12:32:32.615045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:25.531 [2024-12-16 12:32:32.615051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:25.531 [2024-12-16 12:32:32.615057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.531 [2024-12-16 12:32:32.615066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:25.531 [2024-12-16 12:32:32.615072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:20:25.531 [2024-12-16 12:32:32.615078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.639461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.639488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:25.792 [2024-12-16 12:32:32.639496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.327 ms 00:20:25.792 [2024-12-16 12:32:32.639503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.639598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.639605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:25.792 [2024-12-16 12:32:32.639611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:25.792 [2024-12-16 12:32:32.639618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.682947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.682980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:25.792 [2024-12-16 12:32:32.682992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.311 ms 00:20:25.792 [2024-12-16 12:32:32.682999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.683080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.683089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:25.792 [2024-12-16 12:32:32.683096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:25.792 [2024-12-16 12:32:32.683103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.683507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.683521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:25.792 [2024-12-16 12:32:32.683529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:20:25.792 [2024-12-16 12:32:32.683539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.683656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.683665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:25.792 [2024-12-16 12:32:32.683672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:25.792 [2024-12-16 12:32:32.683678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.696015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.696042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:25.792 [2024-12-16 12:32:32.696050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.319 ms 00:20:25.792 [2024-12-16 12:32:32.696057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.706855] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:25.792 [2024-12-16 12:32:32.706883] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:25.792 [2024-12-16 12:32:32.706893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.706900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:25.792 [2024-12-16 12:32:32.706907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.754 ms 00:20:25.792 [2024-12-16 12:32:32.706913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.725842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.725880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:25.792 [2024-12-16 12:32:32.725889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.870 ms 00:20:25.792 [2024-12-16 12:32:32.725895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.735337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.735363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:25.792 [2024-12-16 12:32:32.735371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.385 ms 00:20:25.792 [2024-12-16 12:32:32.735377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.744466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.744491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:25.792 [2024-12-16 12:32:32.744499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.046 ms 00:20:25.792 [2024-12-16 12:32:32.744504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.744979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.744996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:25.792 [2024-12-16 12:32:32.745004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:20:25.792 [2024-12-16 12:32:32.745011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.793531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.793711] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:25.792 [2024-12-16 12:32:32.793728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.502 ms 00:20:25.792 [2024-12-16 12:32:32.793735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.801656] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:25.792 [2024-12-16 12:32:32.816313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.816456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:25.792 [2024-12-16 12:32:32.816470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.506 ms 00:20:25.792 [2024-12-16 12:32:32.816481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.816559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.816568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:25.792 [2024-12-16 12:32:32.816575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:25.792 [2024-12-16 12:32:32.816581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.816627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.816635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:25.792 [2024-12-16 12:32:32.816642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:25.792 [2024-12-16 12:32:32.816652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.816680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.816687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:25.792 [2024-12-16 12:32:32.816694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:25.792 [2024-12-16 12:32:32.816700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.816730] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:25.792 [2024-12-16 12:32:32.816737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.816744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:25.792 [2024-12-16 12:32:32.816751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:25.792 [2024-12-16 12:32:32.816757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.835635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.835740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:25.792 [2024-12-16 12:32:32.835755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.860 ms 00:20:25.792 [2024-12-16 12:32:32.835762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.792 [2024-12-16 12:32:32.835833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.792 [2024-12-16 12:32:32.835842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:25.792 [2024-12-16 12:32:32.835849] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:25.793 [2024-12-16 12:32:32.835856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.793 [2024-12-16 12:32:32.836654] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:25.793 [2024-12-16 12:32:32.839039] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 247.373 ms, result 0 00:20:25.793 [2024-12-16 12:32:32.840395] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:25.793 [2024-12-16 12:32:32.851167] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:27.178 [2024-12-16T12:32:35.227Z] Copying: 15/256 [MB] (15 MBps) [... 19 intermediate Copying progress updates elided ...] [2024-12-16T12:32:53.489Z] Copying: 256/256 [MB] (average 12 MBps)[2024-12-16 12:32:53.441085] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.383 [2024-12-16 12:32:53.451800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.383 [2024-12-16 12:32:53.451992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:46.383 [2024-12-16 12:32:53.452366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:46.383 [2024-12-16 12:32:53.452417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.383 [2024-12-16 12:32:53.452534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:46.383 [2024-12-16 12:32:53.455920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.383 [2024-12-16 12:32:53.456043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:46.383 [2024-12-16 12:32:53.456113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:20:46.383 [2024-12-16 12:32:53.456140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.383 [2024-12-16 12:32:53.456619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.383 [2024-12-16 12:32:53.456711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:46.383 [2024-12-16 12:32:53.456756] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:20:46.383 [2024-12-16 12:32:53.456774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.383 [2024-12-16 12:32:53.459836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.383 [2024-12-16 12:32:53.459911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:46.383 [2024-12-16 12:32:53.459956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.032 ms 00:20:46.383 [2024-12-16 12:32:53.459974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.383 [2024-12-16 12:32:53.465385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.383 [2024-12-16 12:32:53.465466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:46.383 [2024-12-16 12:32:53.465505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.385 ms 00:20:46.383 [2024-12-16 12:32:53.465522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.383 [2024-12-16 12:32:53.484763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.383 [2024-12-16 12:32:53.484854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:46.383 [2024-12-16 12:32:53.484894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.181 ms 00:20:46.383 [2024-12-16 12:32:53.484912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.645 [2024-12-16 12:32:53.496967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.645 [2024-12-16 12:32:53.497058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:46.645 [2024-12-16 12:32:53.497104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.020 ms 00:20:46.645 [2024-12-16 12:32:53.497121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.645 [2024-12-16 12:32:53.497241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.645 [2024-12-16 12:32:53.497262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:46.645 [2024-12-16 12:32:53.497286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:46.645 [2024-12-16 12:32:53.497301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.645 [2024-12-16 12:32:53.515894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.645 [2024-12-16 12:32:53.515973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:46.645 [2024-12-16 12:32:53.516010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.571 ms 00:20:46.645 [2024-12-16 12:32:53.516027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.645 [2024-12-16 12:32:53.534483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.645 [2024-12-16 12:32:53.534567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:46.645 [2024-12-16 12:32:53.534605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.422 ms 00:20:46.645 [2024-12-16 12:32:53.534621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.645 [2024-12-16 12:32:53.552520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.645 [2024-12-16 12:32:53.552609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:46.645 
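This second 'FTL shutdown' mirrors the earlier one: L2P, NV cache metadata, valid map, P2L, band info, trim metadata, and the superblock are persisted before the clean state is set, each step logged as an Action/name/duration/status group and each whole process summarized by a finish_msg line. One way to pull those summaries out of a captured console log (illustrative only; build.log stands in for this output):

    # List every FTL management process with its total duration and result.
    grep -oE "Management process finished, name '[^']+', duration = [0-9.]+ ms, result [0-9]+" build.log

On this run it would report the 'FTL startup', 'FTL trim' (twice), and 'FTL shutdown' processes, all with result 0.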
[2024-12-16 12:32:53.552649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.866 ms 00:20:46.645 [2024-12-16 12:32:53.552666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.645 [2024-12-16 12:32:53.570379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.645 [2024-12-16 12:32:53.570465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:46.645 [2024-12-16 12:32:53.570504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.657 ms 00:20:46.645 [2024-12-16 12:32:53.570512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.645 [2024-12-16 12:32:53.570536] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:46.645 [2024-12-16 12:32:53.570547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Band 2 through Band 100 elided: all 100 bands report the identical line "0 / 261120 wr_cnt: 0 state: free" ...] 00:20:46.647 [2024-12-16 12:32:53.571153] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:46.647 [2024-12-16 12:32:53.571174] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f50ddf9-d4f2-4e3f-9574-029fb265c97a 00:20:46.647 [2024-12-16 12:32:53.571180] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:46.647 [2024-12-16 12:32:53.571186] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:46.647 [2024-12-16 12:32:53.571192] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:46.647 [2024-12-16 12:32:53.571199] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:46.647 [2024-12-16 12:32:53.571205] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:46.647 [2024-12-16 12:32:53.571211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:46.647 [2024-12-16 12:32:53.571219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:46.647 [2024-12-16 12:32:53.571224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:46.647 [2024-12-16 12:32:53.571229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:46.647 [2024-12-16 12:32:53.571235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.647 [2024-12-16 12:32:53.571241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:46.647 [2024-12-16 12:32:53.571247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:20:46.647 [2024-12-16 12:32:53.571253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.581315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.647 [2024-12-16 12:32:53.581410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:46.647 [2024-12-16 12:32:53.581421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.047 ms 00:20:46.647 [2024-12-16 12:32:53.581427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.581731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.647 [2024-12-16 12:32:53.581739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:46.647 [2024-12-16 12:32:53.581747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:46.647 [2024-12-16 12:32:53.581753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647
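
The dump above is the end-of-run accounting for the FTL instance: with user writes reported as 0, the write-amplification factor (total writes divided by user writes) divides by zero and is printed as inf, and the 960 total writes are therefore all internal metadata traffic from the shutdown persist steps. As a minimal sketch (not part of this run), recent SPDK builds expose the same counters on a live target through the bdev_ftl_get_stats RPC; this assumes a running spdk_tgt with an FTL bdev named ftl0 and jq on the PATH:

    # pretty-print the live FTL counter document for ftl0; the exact field
    # names depend on the running SPDK version, so the whole object is dumped
    ./scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq .

[2024-12-16 12:32:53.610998]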
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.611024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.647 [2024-12-16 12:32:53.611033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.611043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.611096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.611103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.647 [2024-12-16 12:32:53.611109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.611116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.611150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.611171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.647 [2024-12-16 12:32:53.611179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.611184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.611202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.611208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.647 [2024-12-16 12:32:53.611214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.611220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.675291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.675326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.647 [2024-12-16 12:32:53.675335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.675342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.727301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.727332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.647 [2024-12-16 12:32:53.727341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.727348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.727397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.727405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.647 [2024-12-16 12:32:53.727412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.727418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.727443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.727455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.647 [2024-12-16 12:32:53.727462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.727468] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.727546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.727554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.647 [2024-12-16 12:32:53.727561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.727567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.727593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.727602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:46.647 [2024-12-16 12:32:53.727611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.727617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.727657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.727665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.647 [2024-12-16 12:32:53.727671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.727677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.727716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.647 [2024-12-16 12:32:53.727727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.647 [2024-12-16 12:32:53.727734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.647 [2024-12-16 12:32:53.727740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.647 [2024-12-16 12:32:53.727869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.078 ms, result 0 00:20:47.218 00:20:47.218 00:20:47.218 12:32:54 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:47.789 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:47.789 12:32:54 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:47.789 12:32:54 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:47.789 12:32:54 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:47.789 12:32:54 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:47.789 12:32:54 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:48.049 12:32:54 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:48.049 Process with pid 78745 is not found 00:20:48.049 12:32:54 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78745 00:20:48.049 12:32:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78745 ']' 00:20:48.049 12:32:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78745 00:20:48.049 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78745) - No such process 00:20:48.049 12:32:54 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78745 is not found' 00:20:48.049 ************************************ 00:20:48.049 END TEST ftl_trim 00:20:48.049 ************************************ 00:20:48.049 00:20:48.049 real 1m28.153s 00:20:48.049 
user 1m44.248s 00:20:48.049 sys 0m14.752s 00:20:48.049 12:32:54 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.049 12:32:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:48.049 12:32:55 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:48.049 12:32:55 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:48.049 12:32:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.049 12:32:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:48.049 ************************************ 00:20:48.049 START TEST ftl_restore 00:20:48.049 ************************************ 00:20:48.049 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:48.049 * Looking for test storage... 00:20:48.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:48.049 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:48.049 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:20:48.049 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.310 12:32:55 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:48.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.310 --rc genhtml_branch_coverage=1 00:20:48.310 --rc genhtml_function_coverage=1 00:20:48.310 --rc genhtml_legend=1 00:20:48.310 --rc geninfo_all_blocks=1 00:20:48.310 --rc geninfo_unexecuted_blocks=1 00:20:48.310 00:20:48.310 ' 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:48.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.310 --rc genhtml_branch_coverage=1 00:20:48.310 --rc genhtml_function_coverage=1 00:20:48.310 --rc genhtml_legend=1 00:20:48.310 --rc geninfo_all_blocks=1 00:20:48.310 --rc geninfo_unexecuted_blocks=1 00:20:48.310 00:20:48.310 ' 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:48.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.310 --rc genhtml_branch_coverage=1 00:20:48.310 --rc genhtml_function_coverage=1 00:20:48.310 --rc genhtml_legend=1 00:20:48.310 --rc geninfo_all_blocks=1 00:20:48.310 --rc geninfo_unexecuted_blocks=1 00:20:48.310 00:20:48.310 ' 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:48.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.310 --rc genhtml_branch_coverage=1 00:20:48.310 --rc genhtml_function_coverage=1 00:20:48.310 --rc genhtml_legend=1 00:20:48.310 --rc geninfo_all_blocks=1 00:20:48.310 --rc geninfo_unexecuted_blocks=1 00:20:48.310 00:20:48.310 ' 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.vnmmEbgGfI 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:48.310 
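
The xtrace above is restore.sh digesting its command line (here, restore.sh -c 0000:00:10.0 0000:00:11.0): the getopts loop records the argument of -c as the NV-cache controller BDF, the parsed words are shifted away, and the first remaining positional argument becomes the base device. A stripped-down sketch of that pattern, with the meanings of -u and -f inferred rather than taken from the script itself:

    #!/usr/bin/env bash
    # parse -u <uuid> / -c <bdf> / -f, then take the base-device BDF positionally
    while getopts ':u:c:f' opt; do
      case $opt in
        u) uuid=$OPTARG ;;        # assumed: reuse an existing FTL instance UUID
        c) nv_cache=$OPTARG ;;    # seen in the trace: NV-cache controller BDF
        f) fast=1 ;;              # assumed: toggle an alternate test code path
      esac
    done
    shift $((OPTIND - 1))         # equivalent to the literal 'shift 2' in the trace
    device=$1                     # 0000:00:11.0 in this run
    timeout=240
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT   # clean up on any exit path
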
12:32:55 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79098 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79098 00:20:48.310 12:32:55 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79098 ']' 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.310 12:32:55 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:48.310 [2024-12-16 12:32:55.290165] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:48.310 [2024-12-16 12:32:55.290374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79098 ] 00:20:48.571 [2024-12-16 12:32:55.443564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.571 [2024-12-16 12:32:55.534693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.142 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.142 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:49.142 12:32:56 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:49.142 12:32:56 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:49.142 12:32:56 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:49.142 12:32:56 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:49.142 12:32:56 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:49.142 12:32:56 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:49.403 12:32:56 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:49.403 12:32:56 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:49.403 12:32:56 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:49.403 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:49.403 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:49.403 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:49.403 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:49.403 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:49.664 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:49.664 { 00:20:49.664 "name": "nvme0n1", 00:20:49.664 "aliases": [ 00:20:49.664 "e74ca086-8eb9-443d-b4c8-5769b46505cb" 00:20:49.664 ], 00:20:49.664 "product_name": "NVMe disk", 00:20:49.664 "block_size": 4096, 00:20:49.664 "num_blocks": 1310720, 00:20:49.664 "uuid": 
"e74ca086-8eb9-443d-b4c8-5769b46505cb", 00:20:49.664 "numa_id": -1, 00:20:49.664 "assigned_rate_limits": { 00:20:49.664 "rw_ios_per_sec": 0, 00:20:49.664 "rw_mbytes_per_sec": 0, 00:20:49.664 "r_mbytes_per_sec": 0, 00:20:49.664 "w_mbytes_per_sec": 0 00:20:49.664 }, 00:20:49.664 "claimed": true, 00:20:49.664 "claim_type": "read_many_write_one", 00:20:49.664 "zoned": false, 00:20:49.664 "supported_io_types": { 00:20:49.664 "read": true, 00:20:49.664 "write": true, 00:20:49.664 "unmap": true, 00:20:49.664 "flush": true, 00:20:49.664 "reset": true, 00:20:49.664 "nvme_admin": true, 00:20:49.664 "nvme_io": true, 00:20:49.664 "nvme_io_md": false, 00:20:49.664 "write_zeroes": true, 00:20:49.664 "zcopy": false, 00:20:49.664 "get_zone_info": false, 00:20:49.664 "zone_management": false, 00:20:49.664 "zone_append": false, 00:20:49.664 "compare": true, 00:20:49.664 "compare_and_write": false, 00:20:49.664 "abort": true, 00:20:49.664 "seek_hole": false, 00:20:49.664 "seek_data": false, 00:20:49.664 "copy": true, 00:20:49.664 "nvme_iov_md": false 00:20:49.664 }, 00:20:49.664 "driver_specific": { 00:20:49.664 "nvme": [ 00:20:49.664 { 00:20:49.664 "pci_address": "0000:00:11.0", 00:20:49.664 "trid": { 00:20:49.664 "trtype": "PCIe", 00:20:49.664 "traddr": "0000:00:11.0" 00:20:49.664 }, 00:20:49.664 "ctrlr_data": { 00:20:49.664 "cntlid": 0, 00:20:49.664 "vendor_id": "0x1b36", 00:20:49.664 "model_number": "QEMU NVMe Ctrl", 00:20:49.664 "serial_number": "12341", 00:20:49.664 "firmware_revision": "8.0.0", 00:20:49.664 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:49.664 "oacs": { 00:20:49.664 "security": 0, 00:20:49.664 "format": 1, 00:20:49.664 "firmware": 0, 00:20:49.664 "ns_manage": 1 00:20:49.664 }, 00:20:49.664 "multi_ctrlr": false, 00:20:49.664 "ana_reporting": false 00:20:49.664 }, 00:20:49.664 "vs": { 00:20:49.664 "nvme_version": "1.4" 00:20:49.664 }, 00:20:49.664 "ns_data": { 00:20:49.664 "id": 1, 00:20:49.664 "can_share": false 00:20:49.664 } 00:20:49.664 } 00:20:49.664 ], 00:20:49.664 "mp_policy": "active_passive" 00:20:49.664 } 00:20:49.664 } 00:20:49.664 ]' 00:20:49.664 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:49.664 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:49.664 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:49.664 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:49.664 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:49.664 12:32:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:49.664 12:32:56 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:49.664 12:32:56 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:49.664 12:32:56 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:49.664 12:32:56 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:49.664 12:32:56 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:49.925 12:32:56 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=4a9ddcee-e232-496e-80ee-d1597a2ad774 00:20:49.925 12:32:56 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:49.925 12:32:56 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4a9ddcee-e232-496e-80ee-d1597a2ad774 00:20:50.186 12:32:57 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:50.447 12:32:57 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=d6b030e9-b8fb-4cac-9c2b-a35c27036257 00:20:50.447 12:32:57 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d6b030e9-b8fb-4cac-9c2b-a35c27036257 00:20:50.447 12:32:57 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:50.447 12:32:57 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:50.447 12:32:57 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:50.447 12:32:57 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:50.447 12:32:57 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:50.447 12:32:57 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:50.447 12:32:57 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:50.447 12:32:57 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:50.447 12:32:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:50.447 12:32:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:50.447 12:32:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:50.447 12:32:57 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:50.447 12:32:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:50.708 12:32:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:50.709 { 00:20:50.709 "name": "f03fbde4-0d67-4dce-9558-db6306ef0100", 00:20:50.709 "aliases": [ 00:20:50.709 "lvs/nvme0n1p0" 00:20:50.709 ], 00:20:50.709 "product_name": "Logical Volume", 00:20:50.709 "block_size": 4096, 00:20:50.709 "num_blocks": 26476544, 00:20:50.709 "uuid": "f03fbde4-0d67-4dce-9558-db6306ef0100", 00:20:50.709 "assigned_rate_limits": { 00:20:50.709 "rw_ios_per_sec": 0, 00:20:50.709 "rw_mbytes_per_sec": 0, 00:20:50.709 "r_mbytes_per_sec": 0, 00:20:50.709 "w_mbytes_per_sec": 0 00:20:50.709 }, 00:20:50.709 "claimed": false, 00:20:50.709 "zoned": false, 00:20:50.709 "supported_io_types": { 00:20:50.709 "read": true, 00:20:50.709 "write": true, 00:20:50.709 "unmap": true, 00:20:50.709 "flush": false, 00:20:50.709 "reset": true, 00:20:50.709 "nvme_admin": false, 00:20:50.709 "nvme_io": false, 00:20:50.709 "nvme_io_md": false, 00:20:50.709 "write_zeroes": true, 00:20:50.709 "zcopy": false, 00:20:50.709 "get_zone_info": false, 00:20:50.709 "zone_management": false, 00:20:50.709 "zone_append": false, 00:20:50.709 "compare": false, 00:20:50.709 "compare_and_write": false, 00:20:50.709 "abort": false, 00:20:50.709 "seek_hole": true, 00:20:50.709 "seek_data": true, 00:20:50.709 "copy": false, 00:20:50.709 "nvme_iov_md": false 00:20:50.709 }, 00:20:50.709 "driver_specific": { 00:20:50.709 "lvol": { 00:20:50.709 "lvol_store_uuid": "d6b030e9-b8fb-4cac-9c2b-a35c27036257", 00:20:50.709 "base_bdev": "nvme0n1", 00:20:50.709 "thin_provision": true, 00:20:50.709 "num_allocated_clusters": 0, 00:20:50.709 "snapshot": false, 00:20:50.709 "clone": false, 00:20:50.709 "esnap_clone": false 00:20:50.709 } 00:20:50.709 } 00:20:50.709 } 00:20:50.709 ]' 00:20:50.709 12:32:57 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:50.709 12:32:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:50.709 12:32:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:50.709 12:32:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:50.709 12:32:57 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:50.709 12:32:57 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:50.709 12:32:57 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:50.709 12:32:57 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:50.709 12:32:57 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:50.970 12:32:58 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:50.970 12:32:58 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:50.970 12:32:58 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:50.970 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:50.970 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:50.970 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:50.970 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:50.970 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:51.232 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:51.232 { 00:20:51.232 "name": "f03fbde4-0d67-4dce-9558-db6306ef0100", 00:20:51.232 "aliases": [ 00:20:51.232 "lvs/nvme0n1p0" 00:20:51.232 ], 00:20:51.232 "product_name": "Logical Volume", 00:20:51.232 "block_size": 4096, 00:20:51.232 "num_blocks": 26476544, 00:20:51.232 "uuid": "f03fbde4-0d67-4dce-9558-db6306ef0100", 00:20:51.232 "assigned_rate_limits": { 00:20:51.232 "rw_ios_per_sec": 0, 00:20:51.232 "rw_mbytes_per_sec": 0, 00:20:51.232 "r_mbytes_per_sec": 0, 00:20:51.232 "w_mbytes_per_sec": 0 00:20:51.232 }, 00:20:51.232 "claimed": false, 00:20:51.232 "zoned": false, 00:20:51.232 "supported_io_types": { 00:20:51.232 "read": true, 00:20:51.232 "write": true, 00:20:51.232 "unmap": true, 00:20:51.232 "flush": false, 00:20:51.232 "reset": true, 00:20:51.232 "nvme_admin": false, 00:20:51.232 "nvme_io": false, 00:20:51.232 "nvme_io_md": false, 00:20:51.232 "write_zeroes": true, 00:20:51.232 "zcopy": false, 00:20:51.232 "get_zone_info": false, 00:20:51.232 "zone_management": false, 00:20:51.232 "zone_append": false, 00:20:51.232 "compare": false, 00:20:51.232 "compare_and_write": false, 00:20:51.232 "abort": false, 00:20:51.232 "seek_hole": true, 00:20:51.232 "seek_data": true, 00:20:51.232 "copy": false, 00:20:51.232 "nvme_iov_md": false 00:20:51.232 }, 00:20:51.232 "driver_specific": { 00:20:51.232 "lvol": { 00:20:51.232 "lvol_store_uuid": "d6b030e9-b8fb-4cac-9c2b-a35c27036257", 00:20:51.232 "base_bdev": "nvme0n1", 00:20:51.232 "thin_provision": true, 00:20:51.232 "num_allocated_clusters": 0, 00:20:51.232 "snapshot": false, 00:20:51.232 "clone": false, 00:20:51.232 "esnap_clone": false 00:20:51.232 } 00:20:51.232 } 00:20:51.232 } 00:20:51.232 ]' 00:20:51.232 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:20:51.232 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:51.232 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:51.232 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:51.232 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:51.232 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:51.232 12:32:58 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:51.232 12:32:58 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:51.528 12:32:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:51.528 12:32:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:51.528 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:51.528 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:51.528 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:51.528 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:51.528 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f03fbde4-0d67-4dce-9558-db6306ef0100 00:20:51.795 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:51.795 { 00:20:51.795 "name": "f03fbde4-0d67-4dce-9558-db6306ef0100", 00:20:51.795 "aliases": [ 00:20:51.795 "lvs/nvme0n1p0" 00:20:51.795 ], 00:20:51.795 "product_name": "Logical Volume", 00:20:51.795 "block_size": 4096, 00:20:51.795 "num_blocks": 26476544, 00:20:51.795 "uuid": "f03fbde4-0d67-4dce-9558-db6306ef0100", 00:20:51.795 "assigned_rate_limits": { 00:20:51.795 "rw_ios_per_sec": 0, 00:20:51.795 "rw_mbytes_per_sec": 0, 00:20:51.795 "r_mbytes_per_sec": 0, 00:20:51.795 "w_mbytes_per_sec": 0 00:20:51.795 }, 00:20:51.795 "claimed": false, 00:20:51.795 "zoned": false, 00:20:51.795 "supported_io_types": { 00:20:51.795 "read": true, 00:20:51.795 "write": true, 00:20:51.795 "unmap": true, 00:20:51.795 "flush": false, 00:20:51.795 "reset": true, 00:20:51.795 "nvme_admin": false, 00:20:51.795 "nvme_io": false, 00:20:51.795 "nvme_io_md": false, 00:20:51.795 "write_zeroes": true, 00:20:51.795 "zcopy": false, 00:20:51.795 "get_zone_info": false, 00:20:51.795 "zone_management": false, 00:20:51.795 "zone_append": false, 00:20:51.795 "compare": false, 00:20:51.795 "compare_and_write": false, 00:20:51.795 "abort": false, 00:20:51.795 "seek_hole": true, 00:20:51.795 "seek_data": true, 00:20:51.795 "copy": false, 00:20:51.795 "nvme_iov_md": false 00:20:51.795 }, 00:20:51.795 "driver_specific": { 00:20:51.795 "lvol": { 00:20:51.795 "lvol_store_uuid": "d6b030e9-b8fb-4cac-9c2b-a35c27036257", 00:20:51.795 "base_bdev": "nvme0n1", 00:20:51.795 "thin_provision": true, 00:20:51.795 "num_allocated_clusters": 0, 00:20:51.795 "snapshot": false, 00:20:51.795 "clone": false, 00:20:51.795 "esnap_clone": false 00:20:51.795 } 00:20:51.795 } 00:20:51.795 } 00:20:51.795 ]' 00:20:51.795 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:51.795 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:51.795 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:51.795 12:32:58 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:51.795 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:51.795 12:32:58 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:51.795 12:32:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:51.795 12:32:58 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f03fbde4-0d67-4dce-9558-db6306ef0100 --l2p_dram_limit 10' 00:20:51.795 12:32:58 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:51.795 12:32:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:51.795 12:32:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:51.795 12:32:58 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:51.795 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:51.795 12:32:58 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f03fbde4-0d67-4dce-9558-db6306ef0100 --l2p_dram_limit 10 -c nvc0n1p0 00:20:52.058 [2024-12-16 12:32:58.976582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.058 [2024-12-16 12:32:58.976928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:52.058 [2024-12-16 12:32:58.976962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:52.058 [2024-12-16 12:32:58.976973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.058 [2024-12-16 12:32:58.977055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.058 [2024-12-16 12:32:58.977069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.058 [2024-12-16 12:32:58.977081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:52.058 [2024-12-16 12:32:58.977090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.058 [2024-12-16 12:32:58.977121] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:52.058 [2024-12-16 12:32:58.977957] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:52.058 [2024-12-16 12:32:58.977993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.058 [2024-12-16 12:32:58.978003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.058 [2024-12-16 12:32:58.978016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:20:52.058 [2024-12-16 12:32:58.978026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.058 [2024-12-16 12:32:58.978108] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c24b070e-f700-43d0-839f-ca33a3409e22 00:20:52.058 [2024-12-16 12:32:58.980462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.058 [2024-12-16 12:32:58.980517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:52.058 [2024-12-16 12:32:58.980530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:52.058 [2024-12-16 12:32:58.980543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.058 [2024-12-16 12:32:58.993393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.058 [2024-12-16 
12:32:58.993445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.058 [2024-12-16 12:32:58.993458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.779 ms 00:20:52.058 [2024-12-16 12:32:58.993469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.058 [2024-12-16 12:32:58.993580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.058 [2024-12-16 12:32:58.993593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.058 [2024-12-16 12:32:58.993602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:52.058 [2024-12-16 12:32:58.993619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.058 [2024-12-16 12:32:58.993682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.058 [2024-12-16 12:32:58.993696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:52.058 [2024-12-16 12:32:58.993705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:52.058 [2024-12-16 12:32:58.993719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.058 [2024-12-16 12:32:58.993741] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:52.058 [2024-12-16 12:32:58.998834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.058 [2024-12-16 12:32:58.998878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.058 [2024-12-16 12:32:58.998894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.096 ms 00:20:52.058 [2024-12-16 12:32:58.998903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.058 [2024-12-16 12:32:58.998952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.058 [2024-12-16 12:32:58.998961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:52.058 [2024-12-16 12:32:58.998973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:52.058 [2024-12-16 12:32:58.998981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.058 [2024-12-16 12:32:58.999018] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:52.058 [2024-12-16 12:32:58.999308] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:52.058 [2024-12-16 12:32:58.999335] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:52.058 [2024-12-16 12:32:58.999349] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:52.058 [2024-12-16 12:32:58.999363] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:52.058 [2024-12-16 12:32:58.999375] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:52.058 [2024-12-16 12:32:58.999387] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:52.058 [2024-12-16 12:32:58.999396] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:52.058 [2024-12-16 12:32:58.999412] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:52.058 [2024-12-16 12:32:58.999420] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:52.058 [2024-12-16 12:32:58.999431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.058 [2024-12-16 12:32:58.999447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:52.058 [2024-12-16 12:32:58.999459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:20:52.058 [2024-12-16 12:32:58.999469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.058 [2024-12-16 12:32:58.999563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.058 [2024-12-16 12:32:58.999573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:52.058 [2024-12-16 12:32:58.999585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:52.058 [2024-12-16 12:32:58.999593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.058 [2024-12-16 12:32:58.999697] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:52.058 [2024-12-16 12:32:58.999709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:52.058 [2024-12-16 12:32:58.999720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:52.058 [2024-12-16 12:32:58.999728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.058 [2024-12-16 12:32:58.999738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:52.058 [2024-12-16 12:32:58.999747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:52.058 [2024-12-16 12:32:58.999757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:52.058 [2024-12-16 12:32:58.999764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:52.058 [2024-12-16 12:32:58.999773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:52.058 [2024-12-16 12:32:58.999780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:52.058 [2024-12-16 12:32:58.999790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:52.059 [2024-12-16 12:32:58.999799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:52.059 [2024-12-16 12:32:58.999811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:52.059 [2024-12-16 12:32:58.999820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:52.059 [2024-12-16 12:32:58.999830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:52.059 [2024-12-16 12:32:58.999837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.059 [2024-12-16 12:32:58.999850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:52.059 [2024-12-16 12:32:58.999860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:52.059 [2024-12-16 12:32:58.999870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.059 [2024-12-16 12:32:58.999877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:52.059 [2024-12-16 12:32:58.999886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:52.059 [2024-12-16 12:32:58.999893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.059 [2024-12-16 12:32:58.999903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:52.059 
[2024-12-16 12:32:58.999909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:52.059 [2024-12-16 12:32:58.999919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.059 [2024-12-16 12:32:58.999926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:52.059 [2024-12-16 12:32:58.999935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:52.059 [2024-12-16 12:32:58.999942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.059 [2024-12-16 12:32:58.999951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:52.059 [2024-12-16 12:32:58.999957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:52.059 [2024-12-16 12:32:58.999967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.059 [2024-12-16 12:32:58.999975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:52.059 [2024-12-16 12:32:58.999987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:52.059 [2024-12-16 12:32:58.999994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:52.059 [2024-12-16 12:32:59.000003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:52.059 [2024-12-16 12:32:59.000009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:52.059 [2024-12-16 12:32:59.000021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:52.059 [2024-12-16 12:32:59.000027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:52.059 [2024-12-16 12:32:59.000038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:52.059 [2024-12-16 12:32:59.000045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.059 [2024-12-16 12:32:59.000054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:52.059 [2024-12-16 12:32:59.000060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:52.059 [2024-12-16 12:32:59.000069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.059 [2024-12-16 12:32:59.000077] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:52.059 [2024-12-16 12:32:59.000087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:52.059 [2024-12-16 12:32:59.000094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:52.059 [2024-12-16 12:32:59.000104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.059 [2024-12-16 12:32:59.000113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:52.059 [2024-12-16 12:32:59.000125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:52.059 [2024-12-16 12:32:59.000134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:52.059 [2024-12-16 12:32:59.000144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:52.059 [2024-12-16 12:32:59.000151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:52.059 [2024-12-16 12:32:59.000444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:52.059 [2024-12-16 12:32:59.000492] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:52.059 [2024-12-16 
12:32:59.000530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:52.059 [2024-12-16 12:32:59.000565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:52.059 [2024-12-16 12:32:59.000599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:52.059 [2024-12-16 12:32:59.000629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:52.059 [2024-12-16 12:32:59.000725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:52.059 [2024-12-16 12:32:59.000758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:52.059 [2024-12-16 12:32:59.000791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:52.059 [2024-12-16 12:32:59.000822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:52.059 [2024-12-16 12:32:59.000854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:52.059 [2024-12-16 12:32:59.000884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:52.059 [2024-12-16 12:32:59.000955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:52.059 [2024-12-16 12:32:59.000987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:52.059 [2024-12-16 12:32:59.001017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:52.059 [2024-12-16 12:32:59.001047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:52.059 [2024-12-16 12:32:59.001081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:52.059 [2024-12-16 12:32:59.001343] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:52.059 [2024-12-16 12:32:59.001407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:52.059 [2024-12-16 12:32:59.001441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:52.059 [2024-12-16 12:32:59.001477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:52.059 [2024-12-16 12:32:59.001506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:52.059 [2024-12-16 12:32:59.001595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:52.059 [2024-12-16 12:32:59.001633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.059 [2024-12-16 12:32:59.001658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:52.059 [2024-12-16 12:32:59.001679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.008 ms 00:20:52.059 [2024-12-16 12:32:59.001706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.059 [2024-12-16 12:32:59.001775] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:52.059 [2024-12-16 12:32:59.001822] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:56.261 [2024-12-16 12:33:02.637463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.637632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:56.262 [2024-12-16 12:33:02.637687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3635.676 ms 00:20:56.262 [2024-12-16 12:33:02.637700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.661447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.661490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.262 [2024-12-16 12:33:02.661502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.575 ms 00:20:56.262 [2024-12-16 12:33:02.661511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.661623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.661635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:56.262 [2024-12-16 12:33:02.661643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:56.262 [2024-12-16 12:33:02.661657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.688406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.688561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.262 [2024-12-16 12:33:02.688576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.721 ms 00:20:56.262 [2024-12-16 12:33:02.688585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.688613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.688625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.262 [2024-12-16 12:33:02.688632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:56.262 [2024-12-16 12:33:02.688646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.689072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.689091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.262 [2024-12-16 12:33:02.689099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:20:56.262 [2024-12-16 12:33:02.689107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 
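Every management step in this trace logs the same four-record quadruple from trace_step (Action, name, duration, status), and the "Scrub NV cache" step above dwarfs the rest at 3635.676 ms. To rank steps by cost, each name record can be paired with the duration record that follows it. A small helper along these lines works once the console output is saved one record per line; sketch only, and console.log is a hypothetical filename:

  awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
       /430:trace_step/ { sub(/.*duration: /, ""); printf "%10.3f ms  %s\n", $1, name }' console.log |
      sort -rn | head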
[2024-12-16 12:33:02.689212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.689223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.262 [2024-12-16 12:33:02.689232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:56.262 [2024-12-16 12:33:02.689242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.702363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.702394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:56.262 [2024-12-16 12:33:02.702402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.105 ms 00:20:56.262 [2024-12-16 12:33:02.702410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.722318] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:56.262 [2024-12-16 12:33:02.725772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.725805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:56.262 [2024-12-16 12:33:02.725819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.295 ms 00:20:56.262 [2024-12-16 12:33:02.725827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.801642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.801672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:56.262 [2024-12-16 12:33:02.801685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.775 ms 00:20:56.262 [2024-12-16 12:33:02.801692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.801843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.801855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:56.262 [2024-12-16 12:33:02.801866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:20:56.262 [2024-12-16 12:33:02.801872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.820662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.820781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:56.262 [2024-12-16 12:33:02.820799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.753 ms 00:20:56.262 [2024-12-16 12:33:02.820806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.838850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.838875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:56.262 [2024-12-16 12:33:02.838886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.011 ms 00:20:56.262 [2024-12-16 12:33:02.838893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:02.839384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:02.839394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:56.262 
[2024-12-16 12:33:02.839403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms
00:20:56.262 [2024-12-16 12:33:02.839411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:56.262 [2024-12-16 12:33:02.903162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:56.262 [2024-12-16 12:33:02.903190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:20:56.262 [2024-12-16 12:33:02.903203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.708 ms
00:20:56.262 [2024-12-16 12:33:02.903210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:56.262 [2024-12-16 12:33:02.923409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:56.262 [2024-12-16 12:33:02.923437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:20:56.262 [2024-12-16 12:33:02.923448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.140 ms
00:20:56.262 [2024-12-16 12:33:02.923455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:56.262 [2024-12-16 12:33:02.941884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:56.262 [2024-12-16 12:33:02.941909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:20:56.262 [2024-12-16 12:33:02.941920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.396 ms
00:20:56.262 [2024-12-16 12:33:02.941926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:56.262 [2024-12-16 12:33:02.961138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:56.262 [2024-12-16 12:33:02.961174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:20:56.262 [2024-12-16 12:33:02.961185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.180 ms
00:20:56.262 [2024-12-16 12:33:02.961191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:56.262 [2024-12-16 12:33:02.961226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:56.262 [2024-12-16 12:33:02.961233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:20:56.262 [2024-12-16 12:33:02.961244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:56.262 [2024-12-16 12:33:02.961250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:56.262 [2024-12-16 12:33:02.961336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:56.262 [2024-12-16 12:33:02.961347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:56.262 [2024-12-16 12:33:02.961355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:20:56.262 [2024-12-16 12:33:02.961362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:56.262 [2024-12-16 12:33:02.962280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3985.329 ms, result 0
00:20:56.262 {
00:20:56.262 "name": "ftl0",
00:20:56.262 "uuid": "c24b070e-f700-43d0-839f-ca33a3409e22"
00:20:56.262 }
00:20:56.262 12:33:02 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:20:56.262 12:33:02 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:56.262 12:33:03 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}'
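The echo / save_subsystem_config / echo trio above stitches a JSON envelope around the RPC dump of the bdev subsystem; the result is the configuration that spdk_dd loads later in this run via --json to bring ftl0 back up without the test app. Roughly equivalent standalone form, as a sketch (the redirect target is inferred from the spdk_dd invocation further down; restore.sh may plumb the output differently):

  {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

00:20:56.262 12:33:03 ftl.ftl_restore --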
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:56.262 [2024-12-16 12:33:03.357693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:03.357734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:56.262 [2024-12-16 12:33:03.357744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:56.262 [2024-12-16 12:33:03.357752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:03.357772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:56.262 [2024-12-16 12:33:03.360024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:03.360048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:56.262 [2024-12-16 12:33:03.360059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.236 ms 00:20:56.262 [2024-12-16 12:33:03.360066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:03.360288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:03.360301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:56.262 [2024-12-16 12:33:03.360310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:20:56.262 [2024-12-16 12:33:03.360316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.262 [2024-12-16 12:33:03.362777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.262 [2024-12-16 12:33:03.362793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:56.262 [2024-12-16 12:33:03.362803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:20:56.262 [2024-12-16 12:33:03.362810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.524 [2024-12-16 12:33:03.367432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.524 [2024-12-16 12:33:03.367455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:56.524 [2024-12-16 12:33:03.367468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.607 ms 00:20:56.524 [2024-12-16 12:33:03.367475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.524 [2024-12-16 12:33:03.386551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.524 [2024-12-16 12:33:03.386679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:56.524 [2024-12-16 12:33:03.386697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.022 ms 00:20:56.524 [2024-12-16 12:33:03.386703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.524 [2024-12-16 12:33:03.400121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.524 [2024-12-16 12:33:03.400149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:56.524 [2024-12-16 12:33:03.400170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.386 ms 00:20:56.524 [2024-12-16 12:33:03.400178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.524 [2024-12-16 12:33:03.400299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.524 [2024-12-16 12:33:03.400309] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:56.525 [2024-12-16 12:33:03.400317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:56.525 [2024-12-16 12:33:03.400324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.525 [2024-12-16 12:33:03.418972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.525 [2024-12-16 12:33:03.418998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:56.525 [2024-12-16 12:33:03.419008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.631 ms 00:20:56.525 [2024-12-16 12:33:03.419014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.525 [2024-12-16 12:33:03.437113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.525 [2024-12-16 12:33:03.437139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:56.525 [2024-12-16 12:33:03.437149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.069 ms 00:20:56.525 [2024-12-16 12:33:03.437168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.525 [2024-12-16 12:33:03.454982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.525 [2024-12-16 12:33:03.455008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:56.525 [2024-12-16 12:33:03.455018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.781 ms 00:20:56.525 [2024-12-16 12:33:03.455024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.525 [2024-12-16 12:33:03.472519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.525 [2024-12-16 12:33:03.472636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:56.525 [2024-12-16 12:33:03.472651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.436 ms 00:20:56.525 [2024-12-16 12:33:03.472657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.525 [2024-12-16 12:33:03.472682] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:56.525 [2024-12-16 12:33:03.472694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472763] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 
[2024-12-16 12:33:03.472933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.472994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:56.525 [2024-12-16 12:33:03.473108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:56.525 [2024-12-16 12:33:03.473236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:56.526 [2024-12-16 12:33:03.473430] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:56.526 [2024-12-16 12:33:03.473439] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c24b070e-f700-43d0-839f-ca33a3409e22 00:20:56.526 [2024-12-16 12:33:03.473445] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:56.526 [2024-12-16 12:33:03.473454] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:56.526 [2024-12-16 12:33:03.473463] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:56.526 [2024-12-16 12:33:03.473472] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:56.526 [2024-12-16 12:33:03.473477] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:56.526 [2024-12-16 12:33:03.473485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:56.526 [2024-12-16 12:33:03.473497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:56.526 [2024-12-16 12:33:03.473504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:56.526 [2024-12-16 12:33:03.473509] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:56.526 [2024-12-16 12:33:03.473516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.526 [2024-12-16 12:33:03.473522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:56.526 [2024-12-16 12:33:03.473530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:20:56.526 [2024-12-16 12:33:03.473539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-12-16 12:33:03.483812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.526 [2024-12-16 12:33:03.483913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:56.526 [2024-12-16 12:33:03.483929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.245 ms 00:20:56.526 [2024-12-16 12:33:03.483935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-12-16 12:33:03.484249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.526 [2024-12-16 12:33:03.484257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:56.526 [2024-12-16 12:33:03.484268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:20:56.526 [2024-12-16 12:33:03.484275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-12-16 12:33:03.519502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-12-16 12:33:03.519532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:56.526 [2024-12-16 12:33:03.519543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-12-16 12:33:03.519550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-12-16 12:33:03.519601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-12-16 12:33:03.519608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.526 [2024-12-16 12:33:03.519618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-12-16 12:33:03.519624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-12-16 12:33:03.519703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-12-16 12:33:03.519712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.526 [2024-12-16 12:33:03.519721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-12-16 12:33:03.519728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-12-16 12:33:03.519745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-12-16 12:33:03.519752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.526 [2024-12-16 12:33:03.519760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-12-16 12:33:03.519767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-12-16 12:33:03.583952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-12-16 12:33:03.583990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.526 [2024-12-16 12:33:03.584001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:56.526 [2024-12-16 12:33:03.584008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.787 [2024-12-16 12:33:03.635890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.787 [2024-12-16 12:33:03.635931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.787 [2024-12-16 12:33:03.635943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.787 [2024-12-16 12:33:03.635952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.787 [2024-12-16 12:33:03.636028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.787 [2024-12-16 12:33:03.636036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:56.787 [2024-12-16 12:33:03.636045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.787 [2024-12-16 12:33:03.636051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.787 [2024-12-16 12:33:03.636108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.787 [2024-12-16 12:33:03.636116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:56.787 [2024-12-16 12:33:03.636125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.787 [2024-12-16 12:33:03.636131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.787 [2024-12-16 12:33:03.636228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.787 [2024-12-16 12:33:03.636237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:56.787 [2024-12-16 12:33:03.636246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.787 [2024-12-16 12:33:03.636253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.787 [2024-12-16 12:33:03.636285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.787 [2024-12-16 12:33:03.636293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:56.787 [2024-12-16 12:33:03.636301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.787 [2024-12-16 12:33:03.636307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.787 [2024-12-16 12:33:03.636346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.787 [2024-12-16 12:33:03.636353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:56.787 [2024-12-16 12:33:03.636361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.787 [2024-12-16 12:33:03.636368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.787 [2024-12-16 12:33:03.636413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.787 [2024-12-16 12:33:03.636422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:56.787 [2024-12-16 12:33:03.636430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.787 [2024-12-16 12:33:03.636436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.787 [2024-12-16 12:33:03.636562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.827 ms, result 0 00:20:56.787 true 00:20:56.787 12:33:03 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79098 
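killprocess here is the autotest_common.sh helper, and the xtrace that follows walks through its moving parts: an empty-pid guard, a kill -0 liveness probe, a ps lookup of the command name (reactor_0 in this run, with the kill skipped if the name turns out to be sudo), then kill and wait. A minimal reconstruction from that trace, a sketch rather than the verbatim helper:

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1                 # nothing to kill
      kill -0 "$pid" || return 1                # bail out if the process is already gone
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1    # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                               # reap it so later stages see a clean slate
  }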
00:20:56.787 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79098 ']'
00:20:56.787 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79098
00:20:56.787 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname
00:20:56.787 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:56.787 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79098
00:20:56.787 killing process with pid 79098
00:20:56.787 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:56.787 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:56.787 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79098'
00:20:56.787 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79098
00:20:56.787 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79098
00:21:02.078 12:33:09 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:21:06.278 262144+0 records in
00:21:06.278 262144+0 records out
00:21:06.278 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.84021 s, 280 MB/s
00:21:06.278 12:33:12 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:21:07.649 12:33:14 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:07.649 [2024-12-16 12:33:14.741886] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:21:07.649 [2024-12-16 12:33:14.741968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79323 ]
00:21:07.908 [2024-12-16 12:33:14.896130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:08.168 [2024-12-16 12:33:15.017303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:21:08.429 [2024-12-16 12:33:15.356750] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:08.429 [2024-12-16 12:33:15.356855] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:08.429 [2024-12-16 12:33:15.522761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:08.429 [2024-12-16 12:33:15.523060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:21:08.429 [2024-12-16 12:33:15.523089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:21:08.429 [2024-12-16 12:33:15.523101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:08.429 [2024-12-16 12:33:15.523210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:08.429 [2024-12-16 12:33:15.523227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:08.429 [2024-12-16 12:33:15.523238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms
00:21:08.429 [2024-12-16 12:33:15.523247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:08.429 [2024-12-16 12:33:15.523275] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:08.429 [2024-12-16 12:33:15.524060] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:08.429 [2024-12-16 12:33:15.524093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.429 [2024-12-16 12:33:15.524104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:08.429 [2024-12-16 12:33:15.524114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:21:08.429 [2024-12-16 12:33:15.524123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.429 [2024-12-16 12:33:15.526498] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:08.690 [2024-12-16 12:33:15.542129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.690 [2024-12-16 12:33:15.542377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:08.690 [2024-12-16 12:33:15.542403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.633 ms 00:21:08.690 [2024-12-16 12:33:15.542413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.690 [2024-12-16 12:33:15.542672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.690 [2024-12-16 12:33:15.542705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:08.690 [2024-12-16 12:33:15.542717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:08.690 [2024-12-16 12:33:15.542725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.690 [2024-12-16 12:33:15.554700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.690 [2024-12-16 12:33:15.554751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:08.690 [2024-12-16 12:33:15.554764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.885 ms 00:21:08.690 [2024-12-16 12:33:15.554781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.690 [2024-12-16 12:33:15.554874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.690 [2024-12-16 12:33:15.554885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:08.690 [2024-12-16 12:33:15.554894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:08.690 [2024-12-16 12:33:15.554903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.690 [2024-12-16 12:33:15.554966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.690 [2024-12-16 12:33:15.554978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:08.690 [2024-12-16 12:33:15.554987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:08.690 [2024-12-16 12:33:15.554995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.690 [2024-12-16 12:33:15.555024] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:08.690 [2024-12-16 12:33:15.559703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.690 [2024-12-16 12:33:15.559750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:08.690 [2024-12-16 12:33:15.559766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.686 ms 00:21:08.690 [2024-12-16 12:33:15.559775] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.690 [2024-12-16 12:33:15.559819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.690 [2024-12-16 12:33:15.559831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:08.690 [2024-12-16 12:33:15.559840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:08.690 [2024-12-16 12:33:15.559849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.690 [2024-12-16 12:33:15.559891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:08.690 [2024-12-16 12:33:15.559920] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:08.691 [2024-12-16 12:33:15.559963] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:08.691 [2024-12-16 12:33:15.559986] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:08.691 [2024-12-16 12:33:15.560099] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:08.691 [2024-12-16 12:33:15.560113] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:08.691 [2024-12-16 12:33:15.560125] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:08.691 [2024-12-16 12:33:15.560136] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:08.691 [2024-12-16 12:33:15.560148] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:08.691 [2024-12-16 12:33:15.560201] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:08.691 [2024-12-16 12:33:15.560214] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:08.691 [2024-12-16 12:33:15.560224] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:08.691 [2024-12-16 12:33:15.560237] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:08.691 [2024-12-16 12:33:15.560247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.691 [2024-12-16 12:33:15.560256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:08.691 [2024-12-16 12:33:15.560266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:21:08.691 [2024-12-16 12:33:15.560276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.691 [2024-12-16 12:33:15.560363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.691 [2024-12-16 12:33:15.560373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:08.691 [2024-12-16 12:33:15.560382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:08.691 [2024-12-16 12:33:15.560391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.691 [2024-12-16 12:33:15.560496] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:08.691 [2024-12-16 12:33:15.560510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:08.691 [2024-12-16 12:33:15.560520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:08.691 [2024-12-16 12:33:15.560528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:08.691 [2024-12-16 12:33:15.560544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:08.691 [2024-12-16 12:33:15.560560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:08.691 [2024-12-16 12:33:15.560569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:08.691 [2024-12-16 12:33:15.560584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:08.691 [2024-12-16 12:33:15.560591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:08.691 [2024-12-16 12:33:15.560599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:08.691 [2024-12-16 12:33:15.560615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:08.691 [2024-12-16 12:33:15.560622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:08.691 [2024-12-16 12:33:15.560630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:08.691 [2024-12-16 12:33:15.560650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:08.691 [2024-12-16 12:33:15.560657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:08.691 [2024-12-16 12:33:15.560671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.691 [2024-12-16 12:33:15.560685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:08.691 [2024-12-16 12:33:15.560692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.691 [2024-12-16 12:33:15.560708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:08.691 [2024-12-16 12:33:15.560715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.691 [2024-12-16 12:33:15.560729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:08.691 [2024-12-16 12:33:15.560737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.691 [2024-12-16 12:33:15.560750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:08.691 [2024-12-16 12:33:15.560757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:08.691 [2024-12-16 12:33:15.560773] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:08.691 [2024-12-16 12:33:15.560780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:08.691 [2024-12-16 12:33:15.560787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:08.691 [2024-12-16 12:33:15.560794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:08.691 [2024-12-16 12:33:15.560802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:08.691 [2024-12-16 12:33:15.560808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:08.691 [2024-12-16 12:33:15.560826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:08.691 [2024-12-16 12:33:15.560834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560841] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:08.691 [2024-12-16 12:33:15.560850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:08.691 [2024-12-16 12:33:15.560858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:08.691 [2024-12-16 12:33:15.560865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.691 [2024-12-16 12:33:15.560875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:08.691 [2024-12-16 12:33:15.560882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:08.691 [2024-12-16 12:33:15.560890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:08.691 [2024-12-16 12:33:15.560898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:08.691 [2024-12-16 12:33:15.560905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:08.691 [2024-12-16 12:33:15.560912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:08.691 [2024-12-16 12:33:15.560921] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:08.691 [2024-12-16 12:33:15.560932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:08.691 [2024-12-16 12:33:15.560943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:08.691 [2024-12-16 12:33:15.560951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:08.691 [2024-12-16 12:33:15.560958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:08.691 [2024-12-16 12:33:15.560965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:08.691 [2024-12-16 12:33:15.560973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:08.691 [2024-12-16 12:33:15.560980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:08.691 [2024-12-16 12:33:15.560989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:08.691 [2024-12-16 12:33:15.560996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:08.691 [2024-12-16 12:33:15.561003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:08.691 [2024-12-16 12:33:15.561010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:08.691 [2024-12-16 12:33:15.561018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:08.691 [2024-12-16 12:33:15.561026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:08.691 [2024-12-16 12:33:15.561034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:08.691 [2024-12-16 12:33:15.561043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:08.691 [2024-12-16 12:33:15.561052] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:08.691 [2024-12-16 12:33:15.561061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:08.691 [2024-12-16 12:33:15.561070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:08.691 [2024-12-16 12:33:15.561078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:08.691 [2024-12-16 12:33:15.561086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:08.691 [2024-12-16 12:33:15.561096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:08.691 [2024-12-16 12:33:15.561105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.691 [2024-12-16 12:33:15.561113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:08.691 [2024-12-16 12:33:15.561122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:21:08.691 [2024-12-16 12:33:15.561130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.691 [2024-12-16 12:33:15.599893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.691 [2024-12-16 12:33:15.599950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:08.691 [2024-12-16 12:33:15.599963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.701 ms 00:21:08.692 [2024-12-16 12:33:15.599978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.692 [2024-12-16 12:33:15.600077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.692 [2024-12-16 12:33:15.600087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:08.692 [2024-12-16 12:33:15.600097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.070 ms 00:21:08.692 [2024-12-16 12:33:15.600106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.692 [2024-12-16 12:33:15.655905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.692 [2024-12-16 12:33:15.655962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:08.692 [2024-12-16 12:33:15.655977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.707 ms 00:21:08.692 [2024-12-16 12:33:15.655986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.692 [2024-12-16 12:33:15.656042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.692 [2024-12-16 12:33:15.656054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:08.692 [2024-12-16 12:33:15.656069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:08.692 [2024-12-16 12:33:15.656077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.692 [2024-12-16 12:33:15.656850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.692 [2024-12-16 12:33:15.656903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:08.692 [2024-12-16 12:33:15.656915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:21:08.692 [2024-12-16 12:33:15.656924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.692 [2024-12-16 12:33:15.657105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.692 [2024-12-16 12:33:15.657118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:08.692 [2024-12-16 12:33:15.657132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:21:08.692 [2024-12-16 12:33:15.657141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.692 [2024-12-16 12:33:15.675664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.692 [2024-12-16 12:33:15.675717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:08.692 [2024-12-16 12:33:15.675728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.471 ms 00:21:08.692 [2024-12-16 12:33:15.675738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.692 [2024-12-16 12:33:15.691134] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:08.692 [2024-12-16 12:33:15.691198] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:08.692 [2024-12-16 12:33:15.691214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.692 [2024-12-16 12:33:15.691225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:08.692 [2024-12-16 12:33:15.691236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.353 ms 00:21:08.692 [2024-12-16 12:33:15.691244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.692 [2024-12-16 12:33:15.718491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.692 [2024-12-16 12:33:15.718554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:08.692 [2024-12-16 12:33:15.718567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.182 ms 00:21:08.692 [2024-12-16 12:33:15.718575] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.692 [2024-12-16 12:33:15.732197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.692 [2024-12-16 12:33:15.732451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:08.692 [2024-12-16 12:33:15.732474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.560 ms 00:21:08.692 [2024-12-16 12:33:15.732484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.692 [2024-12-16 12:33:15.745812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.692 [2024-12-16 12:33:15.745877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:08.692 [2024-12-16 12:33:15.745889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.169 ms 00:21:08.692 [2024-12-16 12:33:15.745897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.692 [2024-12-16 12:33:15.746624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.692 [2024-12-16 12:33:15.746648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:08.692 [2024-12-16 12:33:15.746661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:21:08.692 [2024-12-16 12:33:15.746674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.953 [2024-12-16 12:33:15.822645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.953 [2024-12-16 12:33:15.822705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:08.953 [2024-12-16 12:33:15.822723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.950 ms 00:21:08.953 [2024-12-16 12:33:15.822741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.953 [2024-12-16 12:33:15.835116] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:08.953 [2024-12-16 12:33:15.839525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.953 [2024-12-16 12:33:15.839574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:08.953 [2024-12-16 12:33:15.839588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.719 ms 00:21:08.953 [2024-12-16 12:33:15.839598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.953 [2024-12-16 12:33:15.839694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.953 [2024-12-16 12:33:15.839709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:08.954 [2024-12-16 12:33:15.839720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:08.954 [2024-12-16 12:33:15.839730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.954 [2024-12-16 12:33:15.839812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.954 [2024-12-16 12:33:15.839825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:08.954 [2024-12-16 12:33:15.839835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:08.954 [2024-12-16 12:33:15.839843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.954 [2024-12-16 12:33:15.839865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.954 [2024-12-16 12:33:15.839875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:08.954 [2024-12-16 12:33:15.839885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:08.954 [2024-12-16 12:33:15.839894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.954 [2024-12-16 12:33:15.839938] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:08.954 [2024-12-16 12:33:15.839954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.954 [2024-12-16 12:33:15.839963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:08.954 [2024-12-16 12:33:15.839972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:08.954 [2024-12-16 12:33:15.839982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.954 [2024-12-16 12:33:15.866842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.954 [2024-12-16 12:33:15.867051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:08.954 [2024-12-16 12:33:15.867075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.838 ms 00:21:08.954 [2024-12-16 12:33:15.867093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.954 [2024-12-16 12:33:15.867206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.954 [2024-12-16 12:33:15.867218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:08.954 [2024-12-16 12:33:15.867229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:08.954 [2024-12-16 12:33:15.867238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.954 [2024-12-16 12:33:15.869436] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 346.059 ms, result 0 00:21:09.894  [2024-12-16T12:33:17.943Z] Copying: 21/1024 [MB] (21 MBps) ... [2024-12-16T12:34:40.147Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-16 12:34:40.092292]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.041 [2024-12-16 12:34:40.092344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:33.041 [2024-12-16 12:34:40.092356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:33.041 [2024-12-16 12:34:40.092364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.041 [2024-12-16 12:34:40.092381] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:33.041 [2024-12-16 12:34:40.094638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.041 [2024-12-16 12:34:40.094796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:33.041 [2024-12-16 12:34:40.094811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:22:33.041 [2024-12-16 12:34:40.094822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.041 [2024-12-16 12:34:40.096989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.041 [2024-12-16 12:34:40.097012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:33.041 [2024-12-16 12:34:40.097021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.145 ms 00:22:33.041 [2024-12-16 12:34:40.097028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.041 [2024-12-16 12:34:40.112070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.041 [2024-12-16 12:34:40.112098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:33.041 [2024-12-16 12:34:40.112107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.030 ms 00:22:33.041 [2024-12-16 12:34:40.112113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.041 [2024-12-16 12:34:40.116769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.041 [2024-12-16 12:34:40.116792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:33.041 [2024-12-16 12:34:40.116799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.627 ms 00:22:33.041 [2024-12-16 12:34:40.116806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.041 [2024-12-16 12:34:40.136341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.041 [2024-12-16 12:34:40.136460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:33.041 [2024-12-16 12:34:40.136474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.502 ms 00:22:33.041 [2024-12-16 12:34:40.136481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.304 [2024-12-16 12:34:40.148625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.304 [2024-12-16 12:34:40.148653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:33.304 [2024-12-16 12:34:40.148663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.119 ms 00:22:33.304 [2024-12-16 12:34:40.148670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.304 [2024-12-16 12:34:40.148763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.304 [2024-12-16 12:34:40.148774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:33.304 [2024-12-16 12:34:40.148781] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:33.304 [2024-12-16 12:34:40.148788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.304 [2024-12-16 12:34:40.167734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.304 [2024-12-16 12:34:40.167840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:33.304 [2024-12-16 12:34:40.167852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.936 ms 00:22:33.304 [2024-12-16 12:34:40.167858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.304 [2024-12-16 12:34:40.186205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.304 [2024-12-16 12:34:40.186305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:33.304 [2024-12-16 12:34:40.186317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.324 ms 00:22:33.304 [2024-12-16 12:34:40.186322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.304 [2024-12-16 12:34:40.204068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.304 [2024-12-16 12:34:40.204092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:33.304 [2024-12-16 12:34:40.204100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.723 ms 00:22:33.304 [2024-12-16 12:34:40.204106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.304 [2024-12-16 12:34:40.221695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.304 [2024-12-16 12:34:40.221794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:33.304 [2024-12-16 12:34:40.221807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.544 ms 00:22:33.304 [2024-12-16 12:34:40.221812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.304 [2024-12-16 12:34:40.221835] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:33.304 [2024-12-16 12:34:40.221846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 
12:34:40.221911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.221996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:33.304 [2024-12-16 12:34:40.222055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:33.304 [2024-12-16 12:34:40.222076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:33.305 [2024-12-16 12:34:40.222465] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:33.305 [2024-12-16 12:34:40.222473] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c24b070e-f700-43d0-839f-ca33a3409e22 00:22:33.305 [2024-12-16 12:34:40.222480] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:33.305 [2024-12-16 12:34:40.222485] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:33.305 [2024-12-16 12:34:40.222491] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:33.305 [2024-12-16 12:34:40.222497] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:33.305 [2024-12-16 12:34:40.222503] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:33.305 [2024-12-16 12:34:40.222515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:33.305 [2024-12-16 12:34:40.222520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:33.305 [2024-12-16 12:34:40.222525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:33.305 [2024-12-16 12:34:40.222530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:33.305 [2024-12-16 12:34:40.222535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:33.305 [2024-12-16 12:34:40.222541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:33.305 [2024-12-16 12:34:40.222548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:22:33.305 [2024-12-16 12:34:40.222553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.305 [2024-12-16 12:34:40.232624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.305 [2024-12-16 12:34:40.232719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:33.305 [2024-12-16 12:34:40.232730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.056 ms 00:22:33.305 [2024-12-16 12:34:40.232736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.305 [2024-12-16 12:34:40.233025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.305 [2024-12-16 12:34:40.233033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:33.305 [2024-12-16 12:34:40.233040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:22:33.305 [2024-12-16 12:34:40.233051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.305 [2024-12-16 12:34:40.260359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.305 [2024-12-16 12:34:40.260386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:33.305 [2024-12-16 12:34:40.260395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.305 [2024-12-16 12:34:40.260402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.305 [2024-12-16 12:34:40.260444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.305 [2024-12-16 12:34:40.260451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:33.305 [2024-12-16 12:34:40.260457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.305 [2024-12-16 12:34:40.260467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.305 [2024-12-16 12:34:40.260512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.305 [2024-12-16 12:34:40.260519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:33.305 [2024-12-16 12:34:40.260526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.305 [2024-12-16 12:34:40.260533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.305 [2024-12-16 12:34:40.260545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.305 [2024-12-16 12:34:40.260552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:33.306 [2024-12-16 12:34:40.260558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.306 [2024-12-16 12:34:40.260563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.306 [2024-12-16 12:34:40.323243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.306 [2024-12-16 12:34:40.323396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:33.306 [2024-12-16 12:34:40.323410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.306 [2024-12-16 12:34:40.323418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.306 [2024-12-16 
12:34:40.374936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.306 [2024-12-16 12:34:40.375075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:33.306 [2024-12-16 12:34:40.375089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.306 [2024-12-16 12:34:40.375101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.306 [2024-12-16 12:34:40.375192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.306 [2024-12-16 12:34:40.375201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:33.306 [2024-12-16 12:34:40.375207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.306 [2024-12-16 12:34:40.375214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.306 [2024-12-16 12:34:40.375242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.306 [2024-12-16 12:34:40.375249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:33.306 [2024-12-16 12:34:40.375255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.306 [2024-12-16 12:34:40.375262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.306 [2024-12-16 12:34:40.375343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.306 [2024-12-16 12:34:40.375351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:33.306 [2024-12-16 12:34:40.375358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.306 [2024-12-16 12:34:40.375364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.306 [2024-12-16 12:34:40.375391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.306 [2024-12-16 12:34:40.375399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:33.306 [2024-12-16 12:34:40.375406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.306 [2024-12-16 12:34:40.375411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.306 [2024-12-16 12:34:40.375447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.306 [2024-12-16 12:34:40.375458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:33.306 [2024-12-16 12:34:40.375464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.306 [2024-12-16 12:34:40.375471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.306 [2024-12-16 12:34:40.375511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.306 [2024-12-16 12:34:40.375519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:33.306 [2024-12-16 12:34:40.375525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.306 [2024-12-16 12:34:40.375531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.306 [2024-12-16 12:34:40.375640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 283.320 ms, result 0 00:22:34.248 00:22:34.248 00:22:34.248 12:34:41 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:34.248 [2024-12-16 12:34:41.118012] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:34.248 [2024-12-16 12:34:41.118475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80214 ] 00:22:34.248 [2024-12-16 12:34:41.272184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.509 [2024-12-16 12:34:41.357517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.509 [2024-12-16 12:34:41.590452] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:34.509 [2024-12-16 12:34:41.590511] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:34.770 [2024-12-16 12:34:41.745827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.770 [2024-12-16 12:34:41.745867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:34.770 [2024-12-16 12:34:41.745879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:34.770 [2024-12-16 12:34:41.745886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.770 [2024-12-16 12:34:41.745926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.770 [2024-12-16 12:34:41.745936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:34.770 [2024-12-16 12:34:41.745943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:34.770 [2024-12-16 12:34:41.745949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.770 [2024-12-16 12:34:41.745963] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:34.771 [2024-12-16 12:34:41.746545] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:34.771 [2024-12-16 12:34:41.746560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.771 [2024-12-16 12:34:41.746567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:34.771 [2024-12-16 12:34:41.746574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:22:34.771 [2024-12-16 12:34:41.746581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.771 [2024-12-16 12:34:41.747823] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:34.771 [2024-12-16 12:34:41.758603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.771 [2024-12-16 12:34:41.758632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:34.771 [2024-12-16 12:34:41.758642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.781 ms 00:22:34.771 [2024-12-16 12:34:41.758648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.771 [2024-12-16 12:34:41.758702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.771 [2024-12-16 12:34:41.758710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:34.771 [2024-12-16 12:34:41.758716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:34.771 
[2024-12-16 12:34:41.758722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.771 [2024-12-16 12:34:41.765002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.771 [2024-12-16 12:34:41.765150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:34.771 [2024-12-16 12:34:41.765173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.238 ms 00:22:34.771 [2024-12-16 12:34:41.765184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.771 [2024-12-16 12:34:41.765241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.771 [2024-12-16 12:34:41.765248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:34.771 [2024-12-16 12:34:41.765255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:34.771 [2024-12-16 12:34:41.765261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.771 [2024-12-16 12:34:41.765302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.771 [2024-12-16 12:34:41.765311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:34.771 [2024-12-16 12:34:41.765317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:34.771 [2024-12-16 12:34:41.765323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.771 [2024-12-16 12:34:41.765341] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:34.771 [2024-12-16 12:34:41.768368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.771 [2024-12-16 12:34:41.768468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:34.771 [2024-12-16 12:34:41.768485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.031 ms 00:22:34.771 [2024-12-16 12:34:41.768491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.771 [2024-12-16 12:34:41.768523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.771 [2024-12-16 12:34:41.768530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:34.771 [2024-12-16 12:34:41.768537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:34.771 [2024-12-16 12:34:41.768543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.771 [2024-12-16 12:34:41.768557] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:34.771 [2024-12-16 12:34:41.768575] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:34.771 [2024-12-16 12:34:41.768603] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:34.771 [2024-12-16 12:34:41.768618] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:34.771 [2024-12-16 12:34:41.768701] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:34.771 [2024-12-16 12:34:41.768711] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:34.771 [2024-12-16 12:34:41.768719] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
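
Every FTL management step in the notices above and below is traced as a fixed quadruple by mngt/ftl_mngt.c: an "Action" (or "Rollback") marker at source line 427, the step name at 428, the step duration at 430, and a status code at 431. A minimal sketch along the following lines (a hypothetical post-processing helper, not part of the SPDK tree or of this test run; it assumes a captured console log in the format shown here) pairs each name notice with the duration notice that follows it to profile where FTL startup and shutdown time goes:

import re
import sys

# Read a captured console log on stdin. Each management step logs
# "428:trace_step: ... name: <step>" followed by
# "430:trace_step: ... duration: <n> ms"; capture both together.
# The name capture stops at the next HH:MM:SS.mmm console prefix.
text = sys.stdin.read()
pairs = re.findall(
    r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s+\d{2}:\d{2}:\d{2}\.\d{3}"
    r".*?430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms",
    text,
    flags=re.S,
)

# Print the most expensive steps first.
for name, ms in sorted(pairs, key=lambda p: -float(p[1])):
    print(f"{float(ms):10.3f} ms  {name}")

On the first startup sequence above this would rank Restore P2L checkpoints (75.950 ms), Initialize NV cache (55.707 ms) and Initialize metadata (38.701 ms) at the top, consistent with the 346.059 ms total reported by finish_msg for 'FTL startup'.
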
00:22:34.771 [2024-12-16 12:34:41.768727] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:34.771 [2024-12-16 12:34:41.768734] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:34.771 [2024-12-16 12:34:41.768740] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:34.771 [2024-12-16 12:34:41.768746] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:34.771 [2024-12-16 12:34:41.768751] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:34.771 [2024-12-16 12:34:41.768760] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:34.771 [2024-12-16 12:34:41.768766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.771 [2024-12-16 12:34:41.768772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:34.771 [2024-12-16 12:34:41.768778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:22:34.771 [2024-12-16 12:34:41.768784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.771 [2024-12-16 12:34:41.768847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.771 [2024-12-16 12:34:41.768854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:34.771 [2024-12-16 12:34:41.768860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:34.771 [2024-12-16 12:34:41.768865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.771 [2024-12-16 12:34:41.768941] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:34.771 [2024-12-16 12:34:41.768949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:34.771 [2024-12-16 12:34:41.768956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:34.771 [2024-12-16 12:34:41.768962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.771 [2024-12-16 12:34:41.768969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:34.771 [2024-12-16 12:34:41.768974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:34.771 [2024-12-16 12:34:41.768981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:34.771 [2024-12-16 12:34:41.768987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:34.771 [2024-12-16 12:34:41.768993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:34.771 [2024-12-16 12:34:41.768998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:34.771 [2024-12-16 12:34:41.769005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:34.771 [2024-12-16 12:34:41.769011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:34.771 [2024-12-16 12:34:41.769016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:34.771 [2024-12-16 12:34:41.769027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:34.771 [2024-12-16 12:34:41.769033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:34.771 [2024-12-16 12:34:41.769038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.771 [2024-12-16 12:34:41.769043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:34.771 [2024-12-16 12:34:41.769049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:34.771 [2024-12-16 12:34:41.769053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.771 [2024-12-16 12:34:41.769059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:34.771 [2024-12-16 12:34:41.769063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:34.771 [2024-12-16 12:34:41.769068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:34.771 [2024-12-16 12:34:41.769073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:34.771 [2024-12-16 12:34:41.769078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:34.771 [2024-12-16 12:34:41.769083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:34.771 [2024-12-16 12:34:41.769088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:34.771 [2024-12-16 12:34:41.769093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:34.771 [2024-12-16 12:34:41.769098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:34.771 [2024-12-16 12:34:41.769102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:34.771 [2024-12-16 12:34:41.769108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:34.771 [2024-12-16 12:34:41.769113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:34.771 [2024-12-16 12:34:41.769118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:34.771 [2024-12-16 12:34:41.769123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:34.771 [2024-12-16 12:34:41.769128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:34.771 [2024-12-16 12:34:41.769133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:34.771 [2024-12-16 12:34:41.769138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:34.771 [2024-12-16 12:34:41.769143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:34.771 [2024-12-16 12:34:41.769149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:34.771 [2024-12-16 12:34:41.769177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:34.771 [2024-12-16 12:34:41.769183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.771 [2024-12-16 12:34:41.769188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:34.771 [2024-12-16 12:34:41.769193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:34.771 [2024-12-16 12:34:41.769200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.771 [2024-12-16 12:34:41.769206] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:34.771 [2024-12-16 12:34:41.769212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:34.771 [2024-12-16 12:34:41.769218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:34.771 [2024-12-16 12:34:41.769224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.771 [2024-12-16 12:34:41.769231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:34.771 [2024-12-16 12:34:41.769237] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:34.771 [2024-12-16 12:34:41.769242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:34.771 [2024-12-16 12:34:41.769248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:34.772 [2024-12-16 12:34:41.769253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:34.772 [2024-12-16 12:34:41.769258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:34.772 [2024-12-16 12:34:41.769265] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:34.772 [2024-12-16 12:34:41.769272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:34.772 [2024-12-16 12:34:41.769282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:34.772 [2024-12-16 12:34:41.769288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:34.772 [2024-12-16 12:34:41.769293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:34.772 [2024-12-16 12:34:41.769298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:34.772 [2024-12-16 12:34:41.769304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:34.772 [2024-12-16 12:34:41.769309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:34.772 [2024-12-16 12:34:41.769314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:34.772 [2024-12-16 12:34:41.769319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:34.772 [2024-12-16 12:34:41.769324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:34.772 [2024-12-16 12:34:41.769329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:34.772 [2024-12-16 12:34:41.769336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:34.772 [2024-12-16 12:34:41.769341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:34.772 [2024-12-16 12:34:41.769346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:34.772 [2024-12-16 12:34:41.769353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:34.772 [2024-12-16 12:34:41.769358] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:34.772 [2024-12-16 12:34:41.769372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:34.772 [2024-12-16 12:34:41.769379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:34.772 [2024-12-16 12:34:41.769396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:34.772 [2024-12-16 12:34:41.769402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:34.772 [2024-12-16 12:34:41.769408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:34.772 [2024-12-16 12:34:41.769415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.772 [2024-12-16 12:34:41.769421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:34.772 [2024-12-16 12:34:41.769427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:22:34.772 [2024-12-16 12:34:41.769433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.772 [2024-12-16 12:34:41.793621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.772 [2024-12-16 12:34:41.793651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:34.772 [2024-12-16 12:34:41.793661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.144 ms 00:22:34.772 [2024-12-16 12:34:41.793670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.772 [2024-12-16 12:34:41.793734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.772 [2024-12-16 12:34:41.793741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:34.772 [2024-12-16 12:34:41.793748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:34.772 [2024-12-16 12:34:41.793754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.772 [2024-12-16 12:34:41.834110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.772 [2024-12-16 12:34:41.834143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:34.772 [2024-12-16 12:34:41.834153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.314 ms 00:22:34.772 [2024-12-16 12:34:41.834171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.772 [2024-12-16 12:34:41.834206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.772 [2024-12-16 12:34:41.834214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:34.772 [2024-12-16 12:34:41.834224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:34.772 [2024-12-16 12:34:41.834230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.772 [2024-12-16 12:34:41.834641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.772 [2024-12-16 12:34:41.834663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:34.772 [2024-12-16 12:34:41.834671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:22:34.772 [2024-12-16 12:34:41.834678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.772 [2024-12-16 12:34:41.834791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:34.772 [2024-12-16 12:34:41.834800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:34.772 [2024-12-16 12:34:41.834806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:34.772 [2024-12-16 12:34:41.834814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.772 [2024-12-16 12:34:41.846739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.772 [2024-12-16 12:34:41.846764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:34.772 [2024-12-16 12:34:41.846774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.908 ms 00:22:34.772 [2024-12-16 12:34:41.846781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.772 [2024-12-16 12:34:41.857531] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:34.772 [2024-12-16 12:34:41.857558] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:34.772 [2024-12-16 12:34:41.857569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.772 [2024-12-16 12:34:41.857576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:34.772 [2024-12-16 12:34:41.857583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.696 ms 00:22:34.772 [2024-12-16 12:34:41.857589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.876518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.876681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:35.034 [2024-12-16 12:34:41.876694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.842 ms 00:22:35.034 [2024-12-16 12:34:41.876701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.886152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.886189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:35.034 [2024-12-16 12:34:41.886196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.417 ms 00:22:35.034 [2024-12-16 12:34:41.886202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.895324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.895416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:35.034 [2024-12-16 12:34:41.895428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.095 ms 00:22:35.034 [2024-12-16 12:34:41.895434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.895902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.895920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:35.034 [2024-12-16 12:34:41.895930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:22:35.034 [2024-12-16 12:34:41.895936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.944397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.944432] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:35.034 [2024-12-16 12:34:41.944447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.445 ms 00:22:35.034 [2024-12-16 12:34:41.944454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.953069] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:35.034 [2024-12-16 12:34:41.955533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.955559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:35.034 [2024-12-16 12:34:41.955569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.044 ms 00:22:35.034 [2024-12-16 12:34:41.955576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.955631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.955640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:35.034 [2024-12-16 12:34:41.955647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:35.034 [2024-12-16 12:34:41.955655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.955730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.955739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:35.034 [2024-12-16 12:34:41.955746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:35.034 [2024-12-16 12:34:41.955753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.955769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.955776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:35.034 [2024-12-16 12:34:41.955783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:35.034 [2024-12-16 12:34:41.955789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.955822] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:35.034 [2024-12-16 12:34:41.955830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.955837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:35.034 [2024-12-16 12:34:41.955843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:35.034 [2024-12-16 12:34:41.955850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.974631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.974742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:35.034 [2024-12-16 12:34:41.974759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.767 ms 00:22:35.034 [2024-12-16 12:34:41.974766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.974820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-12-16 12:34:41.974828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:35.034 [2024-12-16 12:34:41.974835] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:35.034 [2024-12-16 12:34:41.974842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-12-16 12:34:41.975796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 229.596 ms, result 0 00:22:36.422  [2024-12-16T12:34:44.472Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-16T12:34:45.415Z] Copying: 23/1024 [MB] (11 MBps) [2024-12-16T12:34:46.361Z] Copying: 34/1024 [MB] (11 MBps) [2024-12-16T12:34:47.304Z] Copying: 45/1024 [MB] (10 MBps) [2024-12-16T12:34:48.248Z] Copying: 55/1024 [MB] (10 MBps) [2024-12-16T12:34:49.192Z] Copying: 66/1024 [MB] (10 MBps) [2024-12-16T12:34:50.134Z] Copying: 77/1024 [MB] (11 MBps) [2024-12-16T12:34:51.522Z] Copying: 89/1024 [MB] (11 MBps) [2024-12-16T12:34:52.465Z] Copying: 101/1024 [MB] (11 MBps) [2024-12-16T12:34:53.407Z] Copying: 111/1024 [MB] (10 MBps) [2024-12-16T12:34:54.351Z] Copying: 123/1024 [MB] (11 MBps) [2024-12-16T12:34:55.295Z] Copying: 135/1024 [MB] (11 MBps) [2024-12-16T12:34:56.237Z] Copying: 146/1024 [MB] (11 MBps) [2024-12-16T12:34:57.181Z] Copying: 158/1024 [MB] (11 MBps) [2024-12-16T12:34:58.126Z] Copying: 170/1024 [MB] (11 MBps) [2024-12-16T12:34:59.555Z] Copying: 180/1024 [MB] (10 MBps) [2024-12-16T12:35:00.128Z] Copying: 191/1024 [MB] (10 MBps) [2024-12-16T12:35:01.514Z] Copying: 202/1024 [MB] (11 MBps) [2024-12-16T12:35:02.459Z] Copying: 213/1024 [MB] (10 MBps) [2024-12-16T12:35:03.403Z] Copying: 224/1024 [MB] (10 MBps) [2024-12-16T12:35:04.348Z] Copying: 235/1024 [MB] (11 MBps) [2024-12-16T12:35:05.298Z] Copying: 247/1024 [MB] (11 MBps) [2024-12-16T12:35:06.248Z] Copying: 258/1024 [MB] (11 MBps) [2024-12-16T12:35:07.193Z] Copying: 270/1024 [MB] (11 MBps) [2024-12-16T12:35:08.137Z] Copying: 281/1024 [MB] (11 MBps) [2024-12-16T12:35:09.526Z] Copying: 293/1024 [MB] (11 MBps) [2024-12-16T12:35:10.468Z] Copying: 303/1024 [MB] (10 MBps) [2024-12-16T12:35:11.413Z] Copying: 315/1024 [MB] (11 MBps) [2024-12-16T12:35:12.358Z] Copying: 326/1024 [MB] (11 MBps) [2024-12-16T12:35:13.305Z] Copying: 338/1024 [MB] (11 MBps) [2024-12-16T12:35:14.250Z] Copying: 349/1024 [MB] (11 MBps) [2024-12-16T12:35:15.197Z] Copying: 360/1024 [MB] (10 MBps) [2024-12-16T12:35:16.143Z] Copying: 371/1024 [MB] (11 MBps) [2024-12-16T12:35:17.530Z] Copying: 383/1024 [MB] (11 MBps) [2024-12-16T12:35:18.474Z] Copying: 394/1024 [MB] (11 MBps) [2024-12-16T12:35:19.418Z] Copying: 406/1024 [MB] (11 MBps) [2024-12-16T12:35:20.364Z] Copying: 418/1024 [MB] (11 MBps) [2024-12-16T12:35:21.307Z] Copying: 429/1024 [MB] (11 MBps) [2024-12-16T12:35:22.251Z] Copying: 440/1024 [MB] (11 MBps) [2024-12-16T12:35:23.196Z] Copying: 452/1024 [MB] (11 MBps) [2024-12-16T12:35:24.140Z] Copying: 464/1024 [MB] (11 MBps) [2024-12-16T12:35:25.527Z] Copying: 475/1024 [MB] (11 MBps) [2024-12-16T12:35:26.471Z] Copying: 486/1024 [MB] (10 MBps) [2024-12-16T12:35:27.417Z] Copying: 497/1024 [MB] (11 MBps) [2024-12-16T12:35:28.421Z] Copying: 509/1024 [MB] (11 MBps) [2024-12-16T12:35:29.371Z] Copying: 520/1024 [MB] (11 MBps) [2024-12-16T12:35:30.316Z] Copying: 532/1024 [MB] (11 MBps) [2024-12-16T12:35:31.260Z] Copying: 543/1024 [MB] (11 MBps) [2024-12-16T12:35:32.205Z] Copying: 555/1024 [MB] (11 MBps) [2024-12-16T12:35:33.150Z] Copying: 566/1024 [MB] (11 MBps) [2024-12-16T12:35:34.539Z] Copying: 578/1024 [MB] (11 MBps) [2024-12-16T12:35:35.484Z] Copying: 590/1024 [MB] (11 MBps) [2024-12-16T12:35:36.428Z] Copying: 601/1024 [MB] (11 MBps) 
[2024-12-16T12:35:37.372Z] Copying: 613/1024 [MB] (11 MBps) [2024-12-16T12:35:38.317Z] Copying: 624/1024 [MB] (11 MBps) [2024-12-16T12:35:39.261Z] Copying: 645/1024 [MB] (20 MBps) [2024-12-16T12:35:40.204Z] Copying: 657/1024 [MB] (11 MBps) [2024-12-16T12:35:41.148Z] Copying: 668/1024 [MB] (11 MBps) [2024-12-16T12:35:42.535Z] Copying: 680/1024 [MB] (11 MBps) [2024-12-16T12:35:43.478Z] Copying: 691/1024 [MB] (11 MBps) [2024-12-16T12:35:44.421Z] Copying: 703/1024 [MB] (11 MBps) [2024-12-16T12:35:45.365Z] Copying: 714/1024 [MB] (11 MBps) [2024-12-16T12:35:46.308Z] Copying: 725/1024 [MB] (10 MBps) [2024-12-16T12:35:47.250Z] Copying: 736/1024 [MB] (11 MBps) [2024-12-16T12:35:48.194Z] Copying: 748/1024 [MB] (11 MBps) [2024-12-16T12:35:49.137Z] Copying: 760/1024 [MB] (11 MBps) [2024-12-16T12:35:50.525Z] Copying: 772/1024 [MB] (11 MBps) [2024-12-16T12:35:51.469Z] Copying: 783/1024 [MB] (11 MBps) [2024-12-16T12:35:52.414Z] Copying: 795/1024 [MB] (11 MBps) [2024-12-16T12:35:53.359Z] Copying: 806/1024 [MB] (11 MBps) [2024-12-16T12:35:54.305Z] Copying: 818/1024 [MB] (11 MBps) [2024-12-16T12:35:55.253Z] Copying: 829/1024 [MB] (10 MBps) [2024-12-16T12:35:56.199Z] Copying: 841/1024 [MB] (12 MBps) [2024-12-16T12:35:57.191Z] Copying: 853/1024 [MB] (11 MBps) [2024-12-16T12:35:58.147Z] Copying: 864/1024 [MB] (11 MBps) [2024-12-16T12:35:59.534Z] Copying: 875/1024 [MB] (11 MBps) [2024-12-16T12:36:00.479Z] Copying: 887/1024 [MB] (11 MBps) [2024-12-16T12:36:01.423Z] Copying: 899/1024 [MB] (11 MBps) [2024-12-16T12:36:02.365Z] Copying: 909/1024 [MB] (10 MBps) [2024-12-16T12:36:03.309Z] Copying: 920/1024 [MB] (11 MBps) [2024-12-16T12:36:04.253Z] Copying: 933/1024 [MB] (12 MBps) [2024-12-16T12:36:05.196Z] Copying: 947/1024 [MB] (13 MBps) [2024-12-16T12:36:06.138Z] Copying: 958/1024 [MB] (10 MBps) [2024-12-16T12:36:07.526Z] Copying: 968/1024 [MB] (10 MBps) [2024-12-16T12:36:08.469Z] Copying: 983/1024 [MB] (14 MBps) [2024-12-16T12:36:09.414Z] Copying: 993/1024 [MB] (10 MBps) [2024-12-16T12:36:10.357Z] Copying: 1004/1024 [MB] (10 MBps) [2024-12-16T12:36:10.930Z] Copying: 1016/1024 [MB] (11 MBps) [2024-12-16T12:36:11.191Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-16 12:36:11.080448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.085 [2024-12-16 12:36:11.080911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:04.085 [2024-12-16 12:36:11.081080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:04.085 [2024-12-16 12:36:11.081129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.085 [2024-12-16 12:36:11.081251] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:04.085 [2024-12-16 12:36:11.089121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.085 [2024-12-16 12:36:11.089812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:04.085 [2024-12-16 12:36:11.089983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.112 ms 00:24:04.085 [2024-12-16 12:36:11.090001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.085 [2024-12-16 12:36:11.090334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.085 [2024-12-16 12:36:11.090351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:04.085 [2024-12-16 12:36:11.090363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 
00:24:04.085 [2024-12-16 12:36:11.090373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.085 [2024-12-16 12:36:11.093866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.085 [2024-12-16 12:36:11.094030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:04.085 [2024-12-16 12:36:11.094048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.478 ms 00:24:04.085 [2024-12-16 12:36:11.094066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.085 [2024-12-16 12:36:11.100208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.085 [2024-12-16 12:36:11.100401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:04.085 [2024-12-16 12:36:11.100423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.114 ms 00:24:04.085 [2024-12-16 12:36:11.100432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.085 [2024-12-16 12:36:11.129421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.085 [2024-12-16 12:36:11.129475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:04.085 [2024-12-16 12:36:11.129489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.924 ms 00:24:04.085 [2024-12-16 12:36:11.129498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.085 [2024-12-16 12:36:11.146643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.085 [2024-12-16 12:36:11.146694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:04.085 [2024-12-16 12:36:11.146709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.091 ms 00:24:04.085 [2024-12-16 12:36:11.146718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.085 [2024-12-16 12:36:11.146893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.085 [2024-12-16 12:36:11.146909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:04.085 [2024-12-16 12:36:11.146920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:24:04.085 [2024-12-16 12:36:11.146929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.085 [2024-12-16 12:36:11.174052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.085 [2024-12-16 12:36:11.174102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:04.085 [2024-12-16 12:36:11.174114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.106 ms 00:24:04.085 [2024-12-16 12:36:11.174122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.347 [2024-12-16 12:36:11.200765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.347 [2024-12-16 12:36:11.200828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:04.347 [2024-12-16 12:36:11.200842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.574 ms 00:24:04.348 [2024-12-16 12:36:11.200851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.348 [2024-12-16 12:36:11.226848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.348 [2024-12-16 12:36:11.226899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:04.348 [2024-12-16 12:36:11.226912] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.944 ms 00:24:04.348 [2024-12-16 12:36:11.226920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.348 [2024-12-16 12:36:11.253197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.348 [2024-12-16 12:36:11.253244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:04.348 [2024-12-16 12:36:11.253256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.177 ms 00:24:04.348 [2024-12-16 12:36:11.253264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.348 [2024-12-16 12:36:11.253316] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:04.348 [2024-12-16 12:36:11.253342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253724] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 
12:36:11.253924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.253994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.254001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.254008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.254015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:04.348 [2024-12-16 12:36:11.254023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:24:04.349 [2024-12-16 12:36:11.254121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:04.349 [2024-12-16 12:36:11.254200] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:04.349 [2024-12-16 12:36:11.254210] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c24b070e-f700-43d0-839f-ca33a3409e22 00:24:04.349 [2024-12-16 12:36:11.254218] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:04.349 [2024-12-16 12:36:11.254228] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:04.349 [2024-12-16 12:36:11.254236] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:04.349 [2024-12-16 12:36:11.254246] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:04.349 [2024-12-16 12:36:11.254262] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:04.349 [2024-12-16 12:36:11.254270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:04.349 [2024-12-16 12:36:11.254279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:04.349 [2024-12-16 12:36:11.254285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:04.349 [2024-12-16 12:36:11.254294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:04.349 [2024-12-16 12:36:11.254303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.349 [2024-12-16 12:36:11.254312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:04.349 [2024-12-16 12:36:11.254322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:24:04.349 [2024-12-16 12:36:11.254335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.349 [2024-12-16 12:36:11.268988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.349 [2024-12-16 12:36:11.269210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:04.349 [2024-12-16 12:36:11.269232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.614 ms 00:24:04.349 [2024-12-16 12:36:11.269241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.349 [2024-12-16 12:36:11.269702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.349 [2024-12-16 12:36:11.269720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:04.349 [2024-12-16 12:36:11.269739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:24:04.349 [2024-12-16 12:36:11.269748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.349 [2024-12-16 12:36:11.309972] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:24:04.349 [2024-12-16 12:36:11.310025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.349 [2024-12-16 12:36:11.310038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.349 [2024-12-16 12:36:11.310049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.349 [2024-12-16 12:36:11.310116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.349 [2024-12-16 12:36:11.310126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.349 [2024-12-16 12:36:11.310144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.349 [2024-12-16 12:36:11.310182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.349 [2024-12-16 12:36:11.310288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.349 [2024-12-16 12:36:11.310302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.349 [2024-12-16 12:36:11.310311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.349 [2024-12-16 12:36:11.310321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.349 [2024-12-16 12:36:11.310340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.349 [2024-12-16 12:36:11.310348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.349 [2024-12-16 12:36:11.310357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.349 [2024-12-16 12:36:11.310369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.349 [2024-12-16 12:36:11.404538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.349 [2024-12-16 12:36:11.404818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:04.349 [2024-12-16 12:36:11.404841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.349 [2024-12-16 12:36:11.404851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.610 [2024-12-16 12:36:11.480942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.610 [2024-12-16 12:36:11.481004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:04.610 [2024-12-16 12:36:11.481028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.610 [2024-12-16 12:36:11.481036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.610 [2024-12-16 12:36:11.481113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.610 [2024-12-16 12:36:11.481124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:04.610 [2024-12-16 12:36:11.481134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.610 [2024-12-16 12:36:11.481143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.610 [2024-12-16 12:36:11.481245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.610 [2024-12-16 12:36:11.481259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:04.610 [2024-12-16 12:36:11.481270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.610 [2024-12-16 12:36:11.481279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
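The 'Rollback' entries above are the shutdown side of the same management chain: the startup steps being unwound in reverse order. Once the device is down, the test verifies the data it previously dumped and then writes a second stripe through the restored device — the md5sum -c and spdk_dd --seek commands echoed just below. As a rough sketch only (flags and paths copied verbatim from this log; this is an illustration of the flow, not the actual test/ftl/restore.sh):

#!/usr/bin/env bash
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
DD=$SPDK/build/bin/spdk_dd
CFG=$SPDK/test/ftl/config/ftl.json   # describes ftl0 (base bdev + nvc0n1p0 write cache)

# Check the test file against its recorded checksum, as the log does below.
md5sum -c "$SPDK/test/ftl/testfile.md5"

# Write the test file through the restored FTL bdev at the --seek=131072
# offset shown in the log; each spdk_dd invocation replays the full
# 'FTL startup' and 'FTL shutdown' management chains traced here.
"$DD" --if="$SPDK/test/ftl/testfile" --ob=ftl0 --json="$CFG" --seek=131072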
00:24:04.610 [2024-12-16 12:36:11.481416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.610 [2024-12-16 12:36:11.481430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:04.610 [2024-12-16 12:36:11.481441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.610 [2024-12-16 12:36:11.481450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.610 [2024-12-16 12:36:11.481488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.610 [2024-12-16 12:36:11.481498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:04.610 [2024-12-16 12:36:11.481511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.610 [2024-12-16 12:36:11.481520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.610 [2024-12-16 12:36:11.481579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.610 [2024-12-16 12:36:11.481592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:04.610 [2024-12-16 12:36:11.481601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.610 [2024-12-16 12:36:11.481611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.610 [2024-12-16 12:36:11.481675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.610 [2024-12-16 12:36:11.481690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:04.610 [2024-12-16 12:36:11.481699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.610 [2024-12-16 12:36:11.481708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.610 [2024-12-16 12:36:11.481879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.411 ms, result 0 00:24:05.552 00:24:05.552 00:24:05.552 12:36:12 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:07.466 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:07.466 12:36:14 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:07.725 [2024-12-16 12:36:14.609512] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:07.725 [2024-12-16 12:36:14.609609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81170 ] 00:24:07.725 [2024-12-16 12:36:14.766436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.986 [2024-12-16 12:36:14.896296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.247 [2024-12-16 12:36:15.195844] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:08.247 [2024-12-16 12:36:15.195948] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:08.509 [2024-12-16 12:36:15.360436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.509 [2024-12-16 12:36:15.360522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:08.509 [2024-12-16 12:36:15.360541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:08.509 [2024-12-16 12:36:15.360550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.509 [2024-12-16 12:36:15.360614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.509 [2024-12-16 12:36:15.360628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:08.509 [2024-12-16 12:36:15.360638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:08.509 [2024-12-16 12:36:15.360647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.509 [2024-12-16 12:36:15.360670] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:08.509 [2024-12-16 12:36:15.361423] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:08.509 [2024-12-16 12:36:15.361449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.509 [2024-12-16 12:36:15.361459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:08.509 [2024-12-16 12:36:15.361470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:24:08.509 [2024-12-16 12:36:15.361478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.509 [2024-12-16 12:36:15.363787] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:08.509 [2024-12-16 12:36:15.379515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.509 [2024-12-16 12:36:15.379571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:08.509 [2024-12-16 12:36:15.379586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.729 ms 00:24:08.509 [2024-12-16 12:36:15.379596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.509 [2024-12-16 12:36:15.379693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.509 [2024-12-16 12:36:15.379705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:08.509 [2024-12-16 12:36:15.379714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:08.509 [2024-12-16 12:36:15.379723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.509 [2024-12-16 12:36:15.391601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:08.509 [2024-12-16 12:36:15.391650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:08.510 [2024-12-16 12:36:15.391662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.792 ms 00:24:08.510 [2024-12-16 12:36:15.391678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.510 [2024-12-16 12:36:15.391768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.510 [2024-12-16 12:36:15.391778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:08.510 [2024-12-16 12:36:15.391786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:08.510 [2024-12-16 12:36:15.391795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.510 [2024-12-16 12:36:15.391856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.510 [2024-12-16 12:36:15.391868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:08.510 [2024-12-16 12:36:15.391879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:08.510 [2024-12-16 12:36:15.391889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.510 [2024-12-16 12:36:15.391917] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.510 [2024-12-16 12:36:15.396638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.510 [2024-12-16 12:36:15.396684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:08.510 [2024-12-16 12:36:15.396699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.728 ms 00:24:08.510 [2024-12-16 12:36:15.396708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.510 [2024-12-16 12:36:15.396751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.510 [2024-12-16 12:36:15.396761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:08.510 [2024-12-16 12:36:15.396770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:08.510 [2024-12-16 12:36:15.396778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.510 [2024-12-16 12:36:15.396819] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:08.510 [2024-12-16 12:36:15.396848] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:08.510 [2024-12-16 12:36:15.396890] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:08.510 [2024-12-16 12:36:15.396912] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:08.510 [2024-12-16 12:36:15.397025] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:08.510 [2024-12-16 12:36:15.397037] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:08.510 [2024-12-16 12:36:15.397050] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:08.510 [2024-12-16 12:36:15.397062] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:08.510 [2024-12-16 12:36:15.397072] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:08.510 [2024-12-16 12:36:15.397082] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:08.510 [2024-12-16 12:36:15.397091] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:08.510 [2024-12-16 12:36:15.397100] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:08.510 [2024-12-16 12:36:15.397112] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:08.510 [2024-12-16 12:36:15.397121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.510 [2024-12-16 12:36:15.397131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:08.510 [2024-12-16 12:36:15.397141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:24:08.510 [2024-12-16 12:36:15.397149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.510 [2024-12-16 12:36:15.397271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.510 [2024-12-16 12:36:15.397285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:08.510 [2024-12-16 12:36:15.397293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:08.510 [2024-12-16 12:36:15.397301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.510 [2024-12-16 12:36:15.397427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:08.510 [2024-12-16 12:36:15.397442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:08.510 [2024-12-16 12:36:15.397452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.510 [2024-12-16 12:36:15.397461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:08.510 [2024-12-16 12:36:15.397480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:08.510 [2024-12-16 12:36:15.397496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:08.510 [2024-12-16 12:36:15.397503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.510 [2024-12-16 12:36:15.397517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:08.510 [2024-12-16 12:36:15.397527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:08.510 [2024-12-16 12:36:15.397534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.510 [2024-12-16 12:36:15.397553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:08.510 [2024-12-16 12:36:15.397562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:08.510 [2024-12-16 12:36:15.397569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:08.510 [2024-12-16 12:36:15.397585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:08.510 [2024-12-16 12:36:15.397592] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:08.510 [2024-12-16 12:36:15.397606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.510 [2024-12-16 12:36:15.397622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:08.510 [2024-12-16 12:36:15.397629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.510 [2024-12-16 12:36:15.397642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:08.510 [2024-12-16 12:36:15.397649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.510 [2024-12-16 12:36:15.397662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:08.510 [2024-12-16 12:36:15.397669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.510 [2024-12-16 12:36:15.397682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:08.510 [2024-12-16 12:36:15.397692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.510 [2024-12-16 12:36:15.397706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:08.510 [2024-12-16 12:36:15.397712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:08.510 [2024-12-16 12:36:15.397719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.510 [2024-12-16 12:36:15.397725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:08.510 [2024-12-16 12:36:15.397732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:08.510 [2024-12-16 12:36:15.397738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:08.510 [2024-12-16 12:36:15.397754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:08.510 [2024-12-16 12:36:15.397764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397774] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:08.510 [2024-12-16 12:36:15.397784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:08.510 [2024-12-16 12:36:15.397793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.510 [2024-12-16 12:36:15.397803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.510 [2024-12-16 12:36:15.397812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:08.510 [2024-12-16 12:36:15.397819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:08.510 [2024-12-16 12:36:15.397826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:08.510 
[2024-12-16 12:36:15.397833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:08.510 [2024-12-16 12:36:15.397840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:08.510 [2024-12-16 12:36:15.397846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:08.510 [2024-12-16 12:36:15.397856] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:08.510 [2024-12-16 12:36:15.397866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.510 [2024-12-16 12:36:15.397879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:08.510 [2024-12-16 12:36:15.397887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:08.510 [2024-12-16 12:36:15.397894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:08.510 [2024-12-16 12:36:15.397901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:08.510 [2024-12-16 12:36:15.397908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:08.510 [2024-12-16 12:36:15.397915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:08.510 [2024-12-16 12:36:15.397922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:08.510 [2024-12-16 12:36:15.397931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:08.510 [2024-12-16 12:36:15.397937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:08.510 [2024-12-16 12:36:15.397944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:08.510 [2024-12-16 12:36:15.397951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:08.510 [2024-12-16 12:36:15.397960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:08.510 [2024-12-16 12:36:15.397969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:08.510 [2024-12-16 12:36:15.397978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:08.510 [2024-12-16 12:36:15.397986] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:08.510 [2024-12-16 12:36:15.397995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.510 [2024-12-16 12:36:15.398004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:08.510 [2024-12-16 12:36:15.398012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:08.510 [2024-12-16 12:36:15.398020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:08.510 [2024-12-16 12:36:15.398028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:08.510 [2024-12-16 12:36:15.398040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.510 [2024-12-16 12:36:15.398050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:08.510 [2024-12-16 12:36:15.398059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:24:08.510 [2024-12-16 12:36:15.398067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.510 [2024-12-16 12:36:15.436911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.510 [2024-12-16 12:36:15.436970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:08.510 [2024-12-16 12:36:15.436985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.793 ms 00:24:08.510 [2024-12-16 12:36:15.436999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.510 [2024-12-16 12:36:15.437093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.510 [2024-12-16 12:36:15.437103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:08.510 [2024-12-16 12:36:15.437113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:08.510 [2024-12-16 12:36:15.437121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.511 [2024-12-16 12:36:15.494352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.511 [2024-12-16 12:36:15.494413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:08.511 [2024-12-16 12:36:15.494427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.134 ms 00:24:08.511 [2024-12-16 12:36:15.494437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.511 [2024-12-16 12:36:15.494494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.511 [2024-12-16 12:36:15.494507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:08.511 [2024-12-16 12:36:15.494521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:08.511 [2024-12-16 12:36:15.494529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.511 [2024-12-16 12:36:15.495350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.511 [2024-12-16 12:36:15.495391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:08.511 [2024-12-16 12:36:15.495403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:24:08.511 [2024-12-16 12:36:15.495414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.511 [2024-12-16 12:36:15.495607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.511 [2024-12-16 12:36:15.495619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:08.511 [2024-12-16 12:36:15.495633] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:24:08.511 [2024-12-16 12:36:15.495642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.511 [2024-12-16 12:36:15.514063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.511 [2024-12-16 12:36:15.514425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:08.511 [2024-12-16 12:36:15.514449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.396 ms 00:24:08.511 [2024-12-16 12:36:15.514459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.511 [2024-12-16 12:36:15.530191] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:08.511 [2024-12-16 12:36:15.530243] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:08.511 [2024-12-16 12:36:15.530258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.511 [2024-12-16 12:36:15.530269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:08.511 [2024-12-16 12:36:15.530279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.666 ms 00:24:08.511 [2024-12-16 12:36:15.530287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.511 [2024-12-16 12:36:15.556859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.511 [2024-12-16 12:36:15.556914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:08.511 [2024-12-16 12:36:15.556927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.509 ms 00:24:08.511 [2024-12-16 12:36:15.556935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.511 [2024-12-16 12:36:15.570516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.511 [2024-12-16 12:36:15.570587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:08.511 [2024-12-16 12:36:15.570601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.495 ms 00:24:08.511 [2024-12-16 12:36:15.570609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.511 [2024-12-16 12:36:15.583724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.511 [2024-12-16 12:36:15.583774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:08.511 [2024-12-16 12:36:15.583788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.061 ms 00:24:08.511 [2024-12-16 12:36:15.583797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.511 [2024-12-16 12:36:15.584528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.511 [2024-12-16 12:36:15.584558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:08.511 [2024-12-16 12:36:15.584573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:24:08.511 [2024-12-16 12:36:15.584582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.772 [2024-12-16 12:36:15.659517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.772 [2024-12-16 12:36:15.659804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:08.772 [2024-12-16 12:36:15.659838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.912 ms 00:24:08.772 [2024-12-16 12:36:15.659849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.772 [2024-12-16 12:36:15.672380] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:08.772 [2024-12-16 12:36:15.676520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.772 [2024-12-16 12:36:15.676568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:08.772 [2024-12-16 12:36:15.676583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.436 ms 00:24:08.772 [2024-12-16 12:36:15.676592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.772 [2024-12-16 12:36:15.676695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.772 [2024-12-16 12:36:15.676708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:08.772 [2024-12-16 12:36:15.676719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:08.772 [2024-12-16 12:36:15.676733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.773 [2024-12-16 12:36:15.676812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.773 [2024-12-16 12:36:15.676826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:08.773 [2024-12-16 12:36:15.676835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:08.773 [2024-12-16 12:36:15.676844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.773 [2024-12-16 12:36:15.676869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.773 [2024-12-16 12:36:15.676882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:08.773 [2024-12-16 12:36:15.676891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:08.773 [2024-12-16 12:36:15.676900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.773 [2024-12-16 12:36:15.676940] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:08.773 [2024-12-16 12:36:15.676953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.773 [2024-12-16 12:36:15.676962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:08.773 [2024-12-16 12:36:15.676972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:08.773 [2024-12-16 12:36:15.676980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.773 [2024-12-16 12:36:15.703232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.773 [2024-12-16 12:36:15.703286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:08.773 [2024-12-16 12:36:15.703307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.228 ms 00:24:08.773 [2024-12-16 12:36:15.703316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.773 [2024-12-16 12:36:15.703412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.773 [2024-12-16 12:36:15.703424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:08.773 [2024-12-16 12:36:15.703434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:08.773 [2024-12-16 12:36:15.703443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
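The layout dump in the startup trace above is internally consistent and easy to cross-check: with "L2P entries: 20971520" and "L2P address size: 4", the l2p region should occupy 20971520 × 4 bytes = 80 MiB, which is exactly the "blocks: 80.00 MiB" reported for Region l2p. A quick shell check of that arithmetic (illustrative only, not part of the test):

  awk 'BEGIN { printf "l2p region: %.2f MiB\n", 20971520 * 4 / (1024 * 1024) }'
  # prints: l2p region: 80.00 MiB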
00:24:08.773 [2024-12-16 12:36:15.705053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 344.052 ms, result 0 00:24:09.718  [2024-12-16T12:36:17.769Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-16T12:36:19.152Z] Copying: 30/1024 [MB] (16 MBps) [2024-12-16T12:36:19.725Z] Copying: 51/1024 [MB] (21 MBps) [2024-12-16T12:36:21.112Z] Copying: 70/1024 [MB] (18 MBps) [2024-12-16T12:36:21.763Z] Copying: 85/1024 [MB] (15 MBps) [2024-12-16T12:36:23.152Z] Copying: 101/1024 [MB] (15 MBps) [2024-12-16T12:36:23.725Z] Copying: 116/1024 [MB] (15 MBps) [2024-12-16T12:36:25.112Z] Copying: 137/1024 [MB] (20 MBps) [2024-12-16T12:36:26.056Z] Copying: 155/1024 [MB] (18 MBps) [2024-12-16T12:36:26.999Z] Copying: 166/1024 [MB] (11 MBps) [2024-12-16T12:36:27.942Z] Copying: 177/1024 [MB] (11 MBps) [2024-12-16T12:36:28.886Z] Copying: 189/1024 [MB] (11 MBps) [2024-12-16T12:36:29.831Z] Copying: 200/1024 [MB] (11 MBps) [2024-12-16T12:36:30.775Z] Copying: 212/1024 [MB] (11 MBps) [2024-12-16T12:36:31.719Z] Copying: 223/1024 [MB] (10 MBps) [2024-12-16T12:36:33.107Z] Copying: 234/1024 [MB] (11 MBps) [2024-12-16T12:36:34.051Z] Copying: 245/1024 [MB] (11 MBps) [2024-12-16T12:36:34.994Z] Copying: 257/1024 [MB] (11 MBps) [2024-12-16T12:36:35.937Z] Copying: 269/1024 [MB] (11 MBps) [2024-12-16T12:36:36.881Z] Copying: 280/1024 [MB] (11 MBps) [2024-12-16T12:36:37.825Z] Copying: 291/1024 [MB] (11 MBps) [2024-12-16T12:36:38.768Z] Copying: 302/1024 [MB] (10 MBps) [2024-12-16T12:36:40.154Z] Copying: 313/1024 [MB] (11 MBps) [2024-12-16T12:36:40.726Z] Copying: 327/1024 [MB] (13 MBps) [2024-12-16T12:36:42.110Z] Copying: 338/1024 [MB] (10 MBps) [2024-12-16T12:36:43.052Z] Copying: 349/1024 [MB] (11 MBps) [2024-12-16T12:36:43.995Z] Copying: 361/1024 [MB] (11 MBps) [2024-12-16T12:36:44.939Z] Copying: 374/1024 [MB] (13 MBps) [2024-12-16T12:36:45.882Z] Copying: 389/1024 [MB] (15 MBps) [2024-12-16T12:36:46.826Z] Copying: 401/1024 [MB] (11 MBps) [2024-12-16T12:36:47.771Z] Copying: 413/1024 [MB] (11 MBps) [2024-12-16T12:36:48.762Z] Copying: 424/1024 [MB] (11 MBps) [2024-12-16T12:36:50.150Z] Copying: 438/1024 [MB] (13 MBps) [2024-12-16T12:36:50.722Z] Copying: 449/1024 [MB] (11 MBps) [2024-12-16T12:36:52.105Z] Copying: 460/1024 [MB] (10 MBps) [2024-12-16T12:36:53.049Z] Copying: 471/1024 [MB] (11 MBps) [2024-12-16T12:36:53.993Z] Copying: 482/1024 [MB] (10 MBps) [2024-12-16T12:36:54.939Z] Copying: 493/1024 [MB] (10 MBps) [2024-12-16T12:36:55.884Z] Copying: 505/1024 [MB] (11 MBps) [2024-12-16T12:36:56.829Z] Copying: 515/1024 [MB] (10 MBps) [2024-12-16T12:36:57.774Z] Copying: 527/1024 [MB] (11 MBps) [2024-12-16T12:36:59.159Z] Copying: 550208/1048576 [kB] (10188 kBps) [2024-12-16T12:36:59.731Z] Copying: 560364/1048576 [kB] (10156 kBps) [2024-12-16T12:37:01.119Z] Copying: 558/1024 [MB] (11 MBps) [2024-12-16T12:37:02.064Z] Copying: 569/1024 [MB] (10 MBps) [2024-12-16T12:37:03.008Z] Copying: 580/1024 [MB] (11 MBps) [2024-12-16T12:37:03.952Z] Copying: 591/1024 [MB] (11 MBps) [2024-12-16T12:37:04.896Z] Copying: 603/1024 [MB] (11 MBps) [2024-12-16T12:37:05.840Z] Copying: 614/1024 [MB] (11 MBps) [2024-12-16T12:37:06.783Z] Copying: 625/1024 [MB] (11 MBps) [2024-12-16T12:37:07.727Z] Copying: 637/1024 [MB] (11 MBps) [2024-12-16T12:37:09.114Z] Copying: 648/1024 [MB] (11 MBps) [2024-12-16T12:37:10.056Z] Copying: 660/1024 [MB] (11 MBps) [2024-12-16T12:37:11.001Z] Copying: 671/1024 [MB] (11 MBps) [2024-12-16T12:37:11.944Z] Copying: 682/1024 [MB] (11 MBps) [2024-12-16T12:37:12.886Z] Copying: 693/1024 [MB] 
(11 MBps) [2024-12-16T12:37:13.892Z] Copying: 705/1024 [MB] (11 MBps) [2024-12-16T12:37:14.842Z] Copying: 716/1024 [MB] (11 MBps) [2024-12-16T12:37:15.787Z] Copying: 727/1024 [MB] (11 MBps) [2024-12-16T12:37:16.731Z] Copying: 738/1024 [MB] (11 MBps) [2024-12-16T12:37:18.118Z] Copying: 749/1024 [MB] (10 MBps) [2024-12-16T12:37:19.063Z] Copying: 760/1024 [MB] (11 MBps) [2024-12-16T12:37:20.007Z] Copying: 772/1024 [MB] (11 MBps) [2024-12-16T12:37:20.956Z] Copying: 783/1024 [MB] (11 MBps) [2024-12-16T12:37:21.902Z] Copying: 794/1024 [MB] (10 MBps) [2024-12-16T12:37:22.846Z] Copying: 804/1024 [MB] (10 MBps) [2024-12-16T12:37:23.792Z] Copying: 816/1024 [MB] (11 MBps) [2024-12-16T12:37:24.736Z] Copying: 826/1024 [MB] (10 MBps) [2024-12-16T12:37:26.121Z] Copying: 837/1024 [MB] (10 MBps) [2024-12-16T12:37:27.065Z] Copying: 849/1024 [MB] (11 MBps) [2024-12-16T12:37:28.009Z] Copying: 862/1024 [MB] (12 MBps) [2024-12-16T12:37:28.953Z] Copying: 873/1024 [MB] (11 MBps) [2024-12-16T12:37:29.898Z] Copying: 884/1024 [MB] (11 MBps) [2024-12-16T12:37:30.843Z] Copying: 895/1024 [MB] (11 MBps) [2024-12-16T12:37:31.786Z] Copying: 906/1024 [MB] (11 MBps) [2024-12-16T12:37:32.731Z] Copying: 917/1024 [MB] (10 MBps) [2024-12-16T12:37:34.119Z] Copying: 929/1024 [MB] (11 MBps) [2024-12-16T12:37:35.062Z] Copying: 941/1024 [MB] (11 MBps) [2024-12-16T12:37:36.009Z] Copying: 952/1024 [MB] (11 MBps) [2024-12-16T12:37:36.953Z] Copying: 965/1024 [MB] (12 MBps) [2024-12-16T12:37:37.897Z] Copying: 977/1024 [MB] (11 MBps) [2024-12-16T12:37:38.842Z] Copying: 989/1024 [MB] (11 MBps) [2024-12-16T12:37:39.848Z] Copying: 999/1024 [MB] (10 MBps) [2024-12-16T12:37:40.795Z] Copying: 1009/1024 [MB] (10 MBps) [2024-12-16T12:37:41.740Z] Copying: 1019/1024 [MB] (10 MBps) [2024-12-16T12:37:42.314Z] Copying: 1048228/1048576 [kB] (3808 kBps) [2024-12-16T12:37:42.314Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-16 12:37:42.064301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.208 [2024-12-16 12:37:42.064475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:35.208 [2024-12-16 12:37:42.064538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:35.208 [2024-12-16 12:37:42.064559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.208 [2024-12-16 12:37:42.066010] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:35.208 [2024-12-16 12:37:42.070334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.208 [2024-12-16 12:37:42.070443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:35.208 [2024-12-16 12:37:42.070497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.201 ms 00:25:35.208 [2024-12-16 12:37:42.070516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.208 [2024-12-16 12:37:42.080925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.208 [2024-12-16 12:37:42.081026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:35.208 [2024-12-16 12:37:42.081039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.359 ms 00:25:35.208 [2024-12-16 12:37:42.081053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.208 [2024-12-16 12:37:42.101613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.208 [2024-12-16 12:37:42.101644] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:35.208 [2024-12-16 12:37:42.101653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.545 ms 00:25:35.208 [2024-12-16 12:37:42.101659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.208 [2024-12-16 12:37:42.106337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.208 [2024-12-16 12:37:42.106361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:35.208 [2024-12-16 12:37:42.106370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.655 ms 00:25:35.208 [2024-12-16 12:37:42.106381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.208 [2024-12-16 12:37:42.125871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.208 [2024-12-16 12:37:42.125898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:35.209 [2024-12-16 12:37:42.125906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.456 ms 00:25:35.209 [2024-12-16 12:37:42.125912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.209 [2024-12-16 12:37:42.138485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.209 [2024-12-16 12:37:42.138510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:35.209 [2024-12-16 12:37:42.138519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.547 ms 00:25:35.209 [2024-12-16 12:37:42.138526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.472 [2024-12-16 12:37:42.376417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.472 [2024-12-16 12:37:42.376445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:35.472 [2024-12-16 12:37:42.376454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 237.837 ms 00:25:35.472 [2024-12-16 12:37:42.376459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.472 [2024-12-16 12:37:42.394994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.472 [2024-12-16 12:37:42.395019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:35.472 [2024-12-16 12:37:42.395026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.524 ms 00:25:35.472 [2024-12-16 12:37:42.395032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.472 [2024-12-16 12:37:42.413270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.472 [2024-12-16 12:37:42.413380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:35.472 [2024-12-16 12:37:42.413392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.212 ms 00:25:35.472 [2024-12-16 12:37:42.413398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.472 [2024-12-16 12:37:42.430814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.472 [2024-12-16 12:37:42.430837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:35.472 [2024-12-16 12:37:42.430845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.394 ms 00:25:35.472 [2024-12-16 12:37:42.430850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.472 [2024-12-16 12:37:42.448433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:25:35.472 [2024-12-16 12:37:42.448578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:35.472 [2024-12-16 12:37:42.448590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.527 ms 00:25:35.472 [2024-12-16 12:37:42.448596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.472 [2024-12-16 12:37:42.448617] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:35.472 [2024-12-16 12:37:42.448627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101376 / 261120 wr_cnt: 1 state: open 00:25:35.472 [2024-12-16 12:37:42.448636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448752] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:35.472 [2024-12-16 12:37:42.448828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448897] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.448995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 
12:37:42.449040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:25:35.473 [2024-12-16 12:37:42.449197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:35.473 [2024-12-16 12:37:42.449226] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:35.473 [2024-12-16 12:37:42.449231] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c24b070e-f700-43d0-839f-ca33a3409e22 00:25:35.473 [2024-12-16 12:37:42.449238] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101376 00:25:35.473 [2024-12-16 12:37:42.449244] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102336 00:25:35.473 [2024-12-16 12:37:42.449250] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101376 00:25:35.473 [2024-12-16 12:37:42.449256] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:25:35.473 [2024-12-16 12:37:42.449270] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:35.473 [2024-12-16 12:37:42.449277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:35.473 [2024-12-16 12:37:42.449282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:35.473 [2024-12-16 12:37:42.449287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:35.473 [2024-12-16 12:37:42.449292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:35.473 [2024-12-16 12:37:42.449297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.473 [2024-12-16 12:37:42.449304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:35.473 [2024-12-16 12:37:42.449310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:25:35.473 [2024-12-16 12:37:42.449315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.473 [2024-12-16 12:37:42.459439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.473 [2024-12-16 12:37:42.459542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:35.473 [2024-12-16 12:37:42.459557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.111 ms 00:25:35.473 [2024-12-16 12:37:42.459563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.473 [2024-12-16 12:37:42.459848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.473 [2024-12-16 12:37:42.459860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:35.473 [2024-12-16 12:37:42.459867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:25:35.473 [2024-12-16 12:37:42.459873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.473 [2024-12-16 12:37:42.487364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.473 [2024-12-16 12:37:42.487390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.473 [2024-12-16 12:37:42.487398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
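The statistics block above contains enough to reproduce the reported write amplification factor: WAF = total writes / user writes = 102336 / 101376 ≈ 1.0095, i.e. about 960 blocks of metadata and housekeeping writes on top of the user I/O (which in turn matches the 101376 valid LBAs sitting in Band 1). A one-liner check with the numbers from this run (illustrative only):

  awk 'BEGIN { printf "WAF = %.4f\n", 102336 / 101376 }'
  # prints: WAF = 1.0095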
00:25:35.473 [2024-12-16 12:37:42.487405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.473 [2024-12-16 12:37:42.487444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.473 [2024-12-16 12:37:42.487451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.473 [2024-12-16 12:37:42.487458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.473 [2024-12-16 12:37:42.487464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.473 [2024-12-16 12:37:42.487507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.473 [2024-12-16 12:37:42.487515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.473 [2024-12-16 12:37:42.487525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.474 [2024-12-16 12:37:42.487531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.474 [2024-12-16 12:37:42.487543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.474 [2024-12-16 12:37:42.487550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.474 [2024-12-16 12:37:42.487555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.474 [2024-12-16 12:37:42.487561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.474 [2024-12-16 12:37:42.551989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.474 [2024-12-16 12:37:42.552121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.474 [2024-12-16 12:37:42.552188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.474 [2024-12-16 12:37:42.552207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.735 [2024-12-16 12:37:42.603875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.735 [2024-12-16 12:37:42.604001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.735 [2024-12-16 12:37:42.604040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.735 [2024-12-16 12:37:42.604057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.735 [2024-12-16 12:37:42.604135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.735 [2024-12-16 12:37:42.604154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.735 [2024-12-16 12:37:42.604184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.735 [2024-12-16 12:37:42.604205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.735 [2024-12-16 12:37:42.604244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.735 [2024-12-16 12:37:42.604263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.735 [2024-12-16 12:37:42.604281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.735 [2024-12-16 12:37:42.604320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.735 [2024-12-16 12:37:42.604411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.735 [2024-12-16 12:37:42.604432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.735 [2024-12-16 
12:37:42.604704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.735 [2024-12-16 12:37:42.604729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.735 [2024-12-16 12:37:42.604772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.735 [2024-12-16 12:37:42.604791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:35.735 [2024-12-16 12:37:42.604829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.735 [2024-12-16 12:37:42.604846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.735 [2024-12-16 12:37:42.604890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.735 [2024-12-16 12:37:42.604909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.735 [2024-12-16 12:37:42.604924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.735 [2024-12-16 12:37:42.604939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.735 [2024-12-16 12:37:42.604985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.735 [2024-12-16 12:37:42.604994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.735 [2024-12-16 12:37:42.605001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.735 [2024-12-16 12:37:42.605007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.735 [2024-12-16 12:37:42.605119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 541.912 ms, result 0 00:25:36.680 00:25:36.680 00:25:36.680 12:37:43 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:36.680 [2024-12-16 12:37:43.766104] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
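restore.sh@80 above is the read-back leg: after the FTL device has been shut down cleanly and brought up again, spdk_dd reads the same region out of ftl0 into the test file so it can be checksummed once more. The --skip value mirrors the earlier --seek=131072, and --count=262144 is consistent with the 1024 MB copied above if the bdev uses 4 KiB logical blocks (262144 × 4096 B = 1 GiB). A sketch of this leg in the same style as before; the trailing md5sum step is an assumption based on the restore.sh@76 check earlier, not something shown at this point in the log:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Read the restored region back out of ftl0 into the test file.
  "$SPDK/build/bin/spdk_dd" \
      --ib=ftl0 \
      --of="$SPDK/test/ftl/testfile" \
      --json="$SPDK/test/ftl/config/ftl.json" \
      --skip=131072 \
      --count=262144
  # Assumed follow-up, as at restore.sh@76: verify the restored contents.
  md5sum -c "$SPDK/test/ftl/testfile.md5"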
00:25:36.680 [2024-12-16 12:37:43.766243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82089 ] 00:25:36.940 [2024-12-16 12:37:43.921132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.941 [2024-12-16 12:37:44.005967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.201 [2024-12-16 12:37:44.237726] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:37.201 [2024-12-16 12:37:44.237788] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:37.463 [2024-12-16 12:37:44.393893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.463 [2024-12-16 12:37:44.393934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:37.463 [2024-12-16 12:37:44.393945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:37.463 [2024-12-16 12:37:44.393952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.463 [2024-12-16 12:37:44.393993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.463 [2024-12-16 12:37:44.394004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:37.463 [2024-12-16 12:37:44.394011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:37.463 [2024-12-16 12:37:44.394017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.463 [2024-12-16 12:37:44.394031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:37.463 [2024-12-16 12:37:44.394554] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:37.463 [2024-12-16 12:37:44.394572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.463 [2024-12-16 12:37:44.394578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:37.463 [2024-12-16 12:37:44.394585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:25:37.463 [2024-12-16 12:37:44.394591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.463 [2024-12-16 12:37:44.395825] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:37.463 [2024-12-16 12:37:44.406408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.463 [2024-12-16 12:37:44.406436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:37.463 [2024-12-16 12:37:44.406445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.584 ms 00:25:37.463 [2024-12-16 12:37:44.406451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.463 [2024-12-16 12:37:44.406501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.463 [2024-12-16 12:37:44.406509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:37.463 [2024-12-16 12:37:44.406516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:37.463 [2024-12-16 12:37:44.406522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.463 [2024-12-16 12:37:44.412816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:37.463 [2024-12-16 12:37:44.412982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:37.463 [2024-12-16 12:37:44.412995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.254 ms 00:25:37.463 [2024-12-16 12:37:44.413005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.463 [2024-12-16 12:37:44.413066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.463 [2024-12-16 12:37:44.413074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:37.463 [2024-12-16 12:37:44.413081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:37.463 [2024-12-16 12:37:44.413087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.463 [2024-12-16 12:37:44.413125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.463 [2024-12-16 12:37:44.413133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:37.463 [2024-12-16 12:37:44.413140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:37.463 [2024-12-16 12:37:44.413146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.463 [2024-12-16 12:37:44.413184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:37.463 [2024-12-16 12:37:44.416106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.464 [2024-12-16 12:37:44.416219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:37.464 [2024-12-16 12:37:44.416235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.927 ms 00:25:37.464 [2024-12-16 12:37:44.416242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.464 [2024-12-16 12:37:44.416274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.464 [2024-12-16 12:37:44.416282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:37.464 [2024-12-16 12:37:44.416289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:37.464 [2024-12-16 12:37:44.416295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.464 [2024-12-16 12:37:44.416310] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:37.464 [2024-12-16 12:37:44.416329] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:37.464 [2024-12-16 12:37:44.416359] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:37.464 [2024-12-16 12:37:44.416375] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:37.464 [2024-12-16 12:37:44.416460] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:37.464 [2024-12-16 12:37:44.416469] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:37.464 [2024-12-16 12:37:44.416477] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:37.464 [2024-12-16 12:37:44.416486] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:37.464 [2024-12-16 12:37:44.416493] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:37.464 [2024-12-16 12:37:44.416500] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:37.464 [2024-12-16 12:37:44.416506] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:37.464 [2024-12-16 12:37:44.416512] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:37.464 [2024-12-16 12:37:44.416520] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:37.464 [2024-12-16 12:37:44.416526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.464 [2024-12-16 12:37:44.416533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:37.464 [2024-12-16 12:37:44.416539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:25:37.464 [2024-12-16 12:37:44.416544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.464 [2024-12-16 12:37:44.416608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.464 [2024-12-16 12:37:44.416618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:37.464 [2024-12-16 12:37:44.416625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:37.464 [2024-12-16 12:37:44.416630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.464 [2024-12-16 12:37:44.416706] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:37.464 [2024-12-16 12:37:44.416715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:37.464 [2024-12-16 12:37:44.416721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.464 [2024-12-16 12:37:44.416727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:37.464 [2024-12-16 12:37:44.416739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:37.464 [2024-12-16 12:37:44.416750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:37.464 [2024-12-16 12:37:44.416757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.464 [2024-12-16 12:37:44.416768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:37.464 [2024-12-16 12:37:44.416773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:37.464 [2024-12-16 12:37:44.416779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.464 [2024-12-16 12:37:44.416789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:37.464 [2024-12-16 12:37:44.416796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:37.464 [2024-12-16 12:37:44.416801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:37.464 [2024-12-16 12:37:44.416814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:37.464 [2024-12-16 12:37:44.416820] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:37.464 [2024-12-16 12:37:44.416830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.464 [2024-12-16 12:37:44.416840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:37.464 [2024-12-16 12:37:44.416845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.464 [2024-12-16 12:37:44.416855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:37.464 [2024-12-16 12:37:44.416860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.464 [2024-12-16 12:37:44.416870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:37.464 [2024-12-16 12:37:44.416876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.464 [2024-12-16 12:37:44.416886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:37.464 [2024-12-16 12:37:44.416891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:37.464 [2024-12-16 12:37:44.416901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:37.464 [2024-12-16 12:37:44.416906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:37.464 [2024-12-16 12:37:44.416912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:37.464 [2024-12-16 12:37:44.416917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:37.464 [2024-12-16 12:37:44.416922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:37.464 [2024-12-16 12:37:44.416927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:37.464 [2024-12-16 12:37:44.416938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:37.464 [2024-12-16 12:37:44.416944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416949] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:37.464 [2024-12-16 12:37:44.416955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:37.464 [2024-12-16 12:37:44.416961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.464 [2024-12-16 12:37:44.416967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.464 [2024-12-16 12:37:44.416972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:37.464 [2024-12-16 12:37:44.416977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:37.464 [2024-12-16 12:37:44.416983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:37.464 
[2024-12-16 12:37:44.416989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:37.464 [2024-12-16 12:37:44.416995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:37.464 [2024-12-16 12:37:44.417000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:37.464 [2024-12-16 12:37:44.417007] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:37.464 [2024-12-16 12:37:44.417014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.464 [2024-12-16 12:37:44.417022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:37.464 [2024-12-16 12:37:44.417028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:37.464 [2024-12-16 12:37:44.417034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:37.464 [2024-12-16 12:37:44.417040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:37.464 [2024-12-16 12:37:44.417045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:37.464 [2024-12-16 12:37:44.417050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:37.464 [2024-12-16 12:37:44.417056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:37.464 [2024-12-16 12:37:44.417061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:37.464 [2024-12-16 12:37:44.417067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:37.464 [2024-12-16 12:37:44.417073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:37.464 [2024-12-16 12:37:44.417078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:37.464 [2024-12-16 12:37:44.417083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:37.464 [2024-12-16 12:37:44.417088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:37.464 [2024-12-16 12:37:44.417094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:37.464 [2024-12-16 12:37:44.417099] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:37.464 [2024-12-16 12:37:44.417105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.464 [2024-12-16 12:37:44.417112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:37.464 [2024-12-16 12:37:44.417117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:37.464 [2024-12-16 12:37:44.417123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:37.464 [2024-12-16 12:37:44.417128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:37.464 [2024-12-16 12:37:44.417134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.464 [2024-12-16 12:37:44.417140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:37.464 [2024-12-16 12:37:44.417146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:25:37.464 [2024-12-16 12:37:44.417151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.464 [2024-12-16 12:37:44.441388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.464 [2024-12-16 12:37:44.441416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:37.464 [2024-12-16 12:37:44.441427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.182 ms 00:25:37.465 [2024-12-16 12:37:44.441436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.465 [2024-12-16 12:37:44.441505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.465 [2024-12-16 12:37:44.441513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:37.465 [2024-12-16 12:37:44.441519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:37.465 [2024-12-16 12:37:44.441525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.465 [2024-12-16 12:37:44.482542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.465 [2024-12-16 12:37:44.482573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:37.465 [2024-12-16 12:37:44.482583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.975 ms 00:25:37.465 [2024-12-16 12:37:44.482590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.465 [2024-12-16 12:37:44.482624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.465 [2024-12-16 12:37:44.482633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:37.465 [2024-12-16 12:37:44.482643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:37.465 [2024-12-16 12:37:44.482649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.465 [2024-12-16 12:37:44.483063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.465 [2024-12-16 12:37:44.483084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:37.465 [2024-12-16 12:37:44.483092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:25:37.465 [2024-12-16 12:37:44.483097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.465 [2024-12-16 12:37:44.483219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.465 [2024-12-16 12:37:44.483232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:37.465 [2024-12-16 12:37:44.483239] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:37.465 [2024-12-16 12:37:44.483249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.465 [2024-12-16 12:37:44.495034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.465 [2024-12-16 12:37:44.495061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:37.465 [2024-12-16 12:37:44.495071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.770 ms 00:25:37.465 [2024-12-16 12:37:44.495077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.465 [2024-12-16 12:37:44.505925] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:37.465 [2024-12-16 12:37:44.506055] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:37.465 [2024-12-16 12:37:44.506068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.465 [2024-12-16 12:37:44.506076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:37.465 [2024-12-16 12:37:44.506084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.891 ms 00:25:37.465 [2024-12-16 12:37:44.506090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.465 [2024-12-16 12:37:44.525225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.465 [2024-12-16 12:37:44.525259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:37.465 [2024-12-16 12:37:44.525270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.107 ms 00:25:37.465 [2024-12-16 12:37:44.525277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.465 [2024-12-16 12:37:44.534630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.465 [2024-12-16 12:37:44.534657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:37.465 [2024-12-16 12:37:44.534665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.315 ms 00:25:37.465 [2024-12-16 12:37:44.534671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.465 [2024-12-16 12:37:44.543593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.465 [2024-12-16 12:37:44.543618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:37.465 [2024-12-16 12:37:44.543626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.894 ms 00:25:37.465 [2024-12-16 12:37:44.543632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.465 [2024-12-16 12:37:44.544095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.465 [2024-12-16 12:37:44.544115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:37.465 [2024-12-16 12:37:44.544125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:25:37.465 [2024-12-16 12:37:44.544131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.725 [2024-12-16 12:37:44.592323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.725 [2024-12-16 12:37:44.592369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:37.725 [2024-12-16 12:37:44.592383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.180 ms 00:25:37.725 [2024-12-16 12:37:44.592389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.725 [2024-12-16 12:37:44.600332] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:37.725 [2024-12-16 12:37:44.602467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.725 [2024-12-16 12:37:44.602491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:37.725 [2024-12-16 12:37:44.602500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.046 ms 00:25:37.725 [2024-12-16 12:37:44.602507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.725 [2024-12-16 12:37:44.602556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.725 [2024-12-16 12:37:44.602566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:37.725 [2024-12-16 12:37:44.602573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:37.725 [2024-12-16 12:37:44.602582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.725 [2024-12-16 12:37:44.603859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.725 [2024-12-16 12:37:44.603977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:37.725 [2024-12-16 12:37:44.603991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms 00:25:37.725 [2024-12-16 12:37:44.603997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.725 [2024-12-16 12:37:44.604018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.725 [2024-12-16 12:37:44.604025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:37.725 [2024-12-16 12:37:44.604032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:37.725 [2024-12-16 12:37:44.604039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.725 [2024-12-16 12:37:44.604073] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:37.725 [2024-12-16 12:37:44.604082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.725 [2024-12-16 12:37:44.604089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:37.725 [2024-12-16 12:37:44.604095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:37.726 [2024-12-16 12:37:44.604102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.726 [2024-12-16 12:37:44.622610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.726 [2024-12-16 12:37:44.622710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:37.726 [2024-12-16 12:37:44.622727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.494 ms 00:25:37.726 [2024-12-16 12:37:44.622734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.726 [2024-12-16 12:37:44.622785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.726 [2024-12-16 12:37:44.622794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:37.726 [2024-12-16 12:37:44.622801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:37.726 [2024-12-16 12:37:44.622807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
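The layout dump above reports each region twice in different units: ftl_layout.c prints offsets and sizes in MiB, while the superblock dump (ftl_sb_v5) prints the same regions as raw block offsets and sizes in hex. The two agree once multiplied by the FTL block size, which the figures imply is 4 KiB (an inference from this log, not a stated constant). A quick cross-check for the region at blk_offs:0x20 blk_sz:0x5000 (type 0x2), whose numbers match the l2p entry:

    # Convert the superblock's block counts to MiB and compare with the
    # NV cache layout dump ("Region l2p / offset: 0.12 MiB / blocks: 80.00 MiB").
    awk 'BEGIN {
        mib = 4096 / 1048576           # one 4 KiB FTL block expressed in MiB
        off = 32; sz = 20480           # l2p region: blk_offs 0x20, blk_sz 0x5000
        printf "l2p offset: %.2f MiB, size: %.2f MiB\n", off * mib, sz * mib
    }'
    # -> l2p offset: 0.12 MiB, size: 80.00 MiB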
00:25:37.726 [2024-12-16 12:37:44.623687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 229.426 ms, result 0 00:25:38.668  [2024-12-16T12:37:47.161Z] Copying: 9728/1048576 [kB] (9728 kBps) [... 38 intermediate progress redraws elided: throughput climbs from ~11 MBps to a sustained 46-53 MBps through roughly 650/1024 MB, then drops to 10-25 MBps for the remainder ...] [2024-12-16T12:38:24.457Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-16 12:38:24.371235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.351 [2024-12-16 12:38:24.371414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:17.351 [2024-12-16 12:38:24.371475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:17.351 [2024-12-16 12:38:24.371503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.351 [2024-12-16 12:38:24.371537] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:17.351 [2024-12-16 12:38:24.373781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.351 [2024-12-16 12:38:24.373881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:17.351 [2024-12-16 12:38:24.374077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.213 ms 00:26:17.351 
[2024-12-16 12:38:24.374095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.351 [2024-12-16 12:38:24.374284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.351 [2024-12-16 12:38:24.374305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:17.351 [2024-12-16 12:38:24.374321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:26:17.351 [2024-12-16 12:38:24.374340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.351 [2024-12-16 12:38:24.378663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.351 [2024-12-16 12:38:24.378760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:17.351 [2024-12-16 12:38:24.378805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.302 ms 00:26:17.351 [2024-12-16 12:38:24.378822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.351 [2024-12-16 12:38:24.383547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.351 [2024-12-16 12:38:24.383666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:17.351 [2024-12-16 12:38:24.383717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.538 ms 00:26:17.351 [2024-12-16 12:38:24.383742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.351 [2024-12-16 12:38:24.406452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.351 [2024-12-16 12:38:24.406563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:17.351 [2024-12-16 12:38:24.406609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.654 ms 00:26:17.351 [2024-12-16 12:38:24.406626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.351 [2024-12-16 12:38:24.419398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.351 [2024-12-16 12:38:24.419497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:17.351 [2024-12-16 12:38:24.419539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.686 ms 00:26:17.351 [2024-12-16 12:38:24.419556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.926 [2024-12-16 12:38:24.747057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.926 [2024-12-16 12:38:24.747175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:17.926 [2024-12-16 12:38:24.747218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 327.464 ms 00:26:17.926 [2024-12-16 12:38:24.747237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.926 [2024-12-16 12:38:24.766020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.926 [2024-12-16 12:38:24.766119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:17.926 [2024-12-16 12:38:24.766168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.758 ms 00:26:17.926 [2024-12-16 12:38:24.766186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.926 [2024-12-16 12:38:24.783912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.926 [2024-12-16 12:38:24.784009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:17.926 [2024-12-16 12:38:24.784052] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.695 ms 00:26:17.926 [2024-12-16 12:38:24.784069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.926 [2024-12-16 12:38:24.801283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.926 [2024-12-16 12:38:24.801380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:17.926 [2024-12-16 12:38:24.801424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.183 ms 00:26:17.926 [2024-12-16 12:38:24.801440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.926 [2024-12-16 12:38:24.818885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.926 [2024-12-16 12:38:24.818978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:17.926 [2024-12-16 12:38:24.819020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.395 ms 00:26:17.926 [2024-12-16 12:38:24.819036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.926 [2024-12-16 12:38:24.819066] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:17.926 [2024-12-16 12:38:24.819086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:26:17.926 [... Bands 2-100 elided: 99 identical ftl_dev_dump_bands records, each reading '0 / 261120 wr_cnt: 0 state: free' ...] 00:26:17.927 [2024-12-16 12:38:24.820147] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:17.927 [2024-12-16 12:38:24.820154] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c24b070e-f700-43d0-839f-ca33a3409e22 00:26:17.927 [2024-12-16 12:38:24.820174] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:26:17.927 [2024-12-16 12:38:24.820180] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 30656 00:26:17.927 [2024-12-16 12:38:24.820186] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 29696 00:26:17.927 [2024-12-16 12:38:24.820192] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0323 00:26:17.927 [2024-12-16 12:38:24.820201] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:17.927 [2024-12-16 12:38:24.820212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:17.927 [2024-12-16 12:38:24.820219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:17.927 [2024-12-16 12:38:24.820223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:17.927 [2024-12-16 12:38:24.820229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:17.927 [2024-12-16 12:38:24.820235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.927 [2024-12-16 12:38:24.820241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:17.927 [2024-12-16 12:38:24.820247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:26:17.927 [2024-12-16 12:38:24.820254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.927 [2024-12-16 12:38:24.830132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.927 [2024-12-16 12:38:24.830166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:17.927 [2024-12-16 12:38:24.830179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.863 ms 00:26:17.927 [2024-12-16 12:38:24.830185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.927 [2024-12-16 12:38:24.830475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:26:17.927 [2024-12-16 12:38:24.830484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:17.927 [2024-12-16 12:38:24.830490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:26:17.927 [2024-12-16 12:38:24.830496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.927 [2024-12-16 12:38:24.857925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.927 [2024-12-16 12:38:24.857956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:17.927 [2024-12-16 12:38:24.857964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.927 [2024-12-16 12:38:24.857970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.927 [2024-12-16 12:38:24.858011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.927 [2024-12-16 12:38:24.858018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:17.927 [2024-12-16 12:38:24.858024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.927 [2024-12-16 12:38:24.858030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.927 [2024-12-16 12:38:24.858074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.927 [2024-12-16 12:38:24.858082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:17.927 [2024-12-16 12:38:24.858092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.927 [2024-12-16 12:38:24.858097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.928 [2024-12-16 12:38:24.858110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.928 [2024-12-16 12:38:24.858117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:17.928 [2024-12-16 12:38:24.858123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.928 [2024-12-16 12:38:24.858129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.928 [2024-12-16 12:38:24.921457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.928 [2024-12-16 12:38:24.921496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.928 [2024-12-16 12:38:24.921505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.928 [2024-12-16 12:38:24.921512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.928 [2024-12-16 12:38:24.972924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.928 [2024-12-16 12:38:24.973115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:17.928 [2024-12-16 12:38:24.973129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.928 [2024-12-16 12:38:24.973136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.928 [2024-12-16 12:38:24.973220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.928 [2024-12-16 12:38:24.973229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.928 [2024-12-16 12:38:24.973236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.928 [2024-12-16 12:38:24.973246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.928 
[2024-12-16 12:38:24.973277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.928 [2024-12-16 12:38:24.973285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.928 [2024-12-16 12:38:24.973293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.928 [2024-12-16 12:38:24.973300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.928 [2024-12-16 12:38:24.973390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.928 [2024-12-16 12:38:24.973399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.928 [2024-12-16 12:38:24.973405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.928 [2024-12-16 12:38:24.973411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.928 [2024-12-16 12:38:24.973441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.928 [2024-12-16 12:38:24.973449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:17.928 [2024-12-16 12:38:24.973456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.928 [2024-12-16 12:38:24.973462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.928 [2024-12-16 12:38:24.973495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.928 [2024-12-16 12:38:24.973504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.928 [2024-12-16 12:38:24.973511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.928 [2024-12-16 12:38:24.973517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.928 [2024-12-16 12:38:24.973560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.928 [2024-12-16 12:38:24.973568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.928 [2024-12-16 12:38:24.973575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.928 [2024-12-16 12:38:24.973581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.928 [2024-12-16 12:38:24.973691] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 602.429 ms, result 0 00:26:18.503 00:26:18.503 00:26:18.503 12:38:25 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:21.048 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79098 00:26:21.048 12:38:27 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79098 ']' 00:26:21.048 12:38:27 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79098 00:26:21.048 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79098) - No such process 
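The teardown above has just walked killprocess from autotest_common.sh: an emptiness guard at line 954 ('[' -z 79098 ']'), a kill -0 liveness probe at line 958, and -- because the target pid was already gone -- the fallback echo at line 981 that appears next in the log. A minimal reconstruction of that pattern (the line tags match the trace; the body is a sketch, not SPDK's exact helper):

    # Probe a pid with kill -0 and only signal it if it is still alive.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1     # line 954: refuse an empty pid argument
        if kill -0 "$pid"; then       # line 958: liveness probe; its stderr
                                      # ("No such process") is what shows above
            kill "$pid"               # still running: terminate it
        else
            echo "Process with pid $pid is not found"   # line 981 fallback
        fi
    }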
00:26:21.048 Process with pid 79098 is not found 00:26:21.048 12:38:27 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79098 is not found' 00:26:21.048 Remove shared memory files 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:21.048 12:38:27 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:21.048 00:26:21.048 real 5m32.728s 00:26:21.048 user 5m21.555s 00:26:21.048 sys 0m11.306s 00:26:21.048 ************************************ 00:26:21.048 END TEST ftl_restore 00:26:21.048 ************************************ 00:26:21.048 12:38:27 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.048 12:38:27 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:21.048 12:38:27 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:21.048 12:38:27 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:21.048 12:38:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.048 12:38:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:21.048 ************************************ 00:26:21.048 START TEST ftl_dirty_shutdown 00:26:21.048 ************************************ 00:26:21.048 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:21.048 * Looking for test storage... 
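Each run_test invocation opens with the same autotest_common.sh preamble: locate the test storage, then probe the installed lcov and compare its version against 1.15 to choose coverage flags. The xtrace that follows is scripts/common.sh performing that comparison by splitting the version strings on `.`, `-` and `:` and comparing components numerically; a condensed sketch of the idea (the real cmp_versions loops over all components and supports several operators):

  IFS='.-:' read -ra ver1 <<< "1.15"
  IFS='.-:' read -ra ver2 <<< "2"
  (( ver1[0] < ver2[0] )) && echo "lcov 1.15 < 2: enable the branch/function coverage flags"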
00:26:21.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:21.048 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:21.048 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:21.048 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:26:21.048 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:21.048 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:21.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.049 --rc genhtml_branch_coverage=1 00:26:21.049 --rc genhtml_function_coverage=1 00:26:21.049 --rc genhtml_legend=1 00:26:21.049 --rc geninfo_all_blocks=1 00:26:21.049 --rc geninfo_unexecuted_blocks=1 00:26:21.049 00:26:21.049 ' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:21.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.049 --rc genhtml_branch_coverage=1 00:26:21.049 --rc genhtml_function_coverage=1 00:26:21.049 --rc genhtml_legend=1 00:26:21.049 --rc geninfo_all_blocks=1 00:26:21.049 --rc geninfo_unexecuted_blocks=1 00:26:21.049 00:26:21.049 ' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:21.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.049 --rc genhtml_branch_coverage=1 00:26:21.049 --rc genhtml_function_coverage=1 00:26:21.049 --rc genhtml_legend=1 00:26:21.049 --rc geninfo_all_blocks=1 00:26:21.049 --rc geninfo_unexecuted_blocks=1 00:26:21.049 00:26:21.049 ' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:21.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.049 --rc genhtml_branch_coverage=1 00:26:21.049 --rc genhtml_function_coverage=1 00:26:21.049 --rc genhtml_legend=1 00:26:21.049 --rc geninfo_all_blocks=1 00:26:21.049 --rc geninfo_unexecuted_blocks=1 00:26:21.049 00:26:21.049 ' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:21.049 12:38:27 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82645 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82645 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82645 ']' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.049 12:38:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:21.049 [2024-12-16 12:38:28.052028] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
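dirty_shutdown.sh was invoked as `dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0`, and the parsing above resolves that into nv_cache (the write-buffer device) and device (the base device), then fixes the test constants: a 240 s RPC timeout and a payload of 262144 blocks of 4096 B. A simplified sketch of the option handling visible in the trace (the real script also takes a -u option and literally runs `shift 2` after the getopts loop):

  while getopts ':u:c:' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;   # -c 0000:00:10.0 selects the NV-cache PCIe address
    esac
  done
  shift 2                      # drop '-c <addr>', as the trace shows
  device=$1                    # 0000:00:11.0, the base PCIe address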
00:26:21.049 [2024-12-16 12:38:28.052731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82645 ] 00:26:21.310 [2024-12-16 12:38:28.206482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.310 [2024-12-16 12:38:28.297175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.881 12:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.881 12:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:21.881 12:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:21.881 12:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:21.881 12:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:21.881 12:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:21.881 12:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:21.881 12:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:22.142 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:22.142 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:22.142 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:22.142 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:22.142 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:22.142 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:22.142 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:22.142 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:22.403 { 00:26:22.403 "name": "nvme0n1", 00:26:22.403 "aliases": [ 00:26:22.403 "0e6099ce-0028-4cec-83c8-6794f4e16aa6" 00:26:22.403 ], 00:26:22.403 "product_name": "NVMe disk", 00:26:22.403 "block_size": 4096, 00:26:22.403 "num_blocks": 1310720, 00:26:22.403 "uuid": "0e6099ce-0028-4cec-83c8-6794f4e16aa6", 00:26:22.403 "numa_id": -1, 00:26:22.403 "assigned_rate_limits": { 00:26:22.403 "rw_ios_per_sec": 0, 00:26:22.403 "rw_mbytes_per_sec": 0, 00:26:22.403 "r_mbytes_per_sec": 0, 00:26:22.403 "w_mbytes_per_sec": 0 00:26:22.403 }, 00:26:22.403 "claimed": true, 00:26:22.403 "claim_type": "read_many_write_one", 00:26:22.403 "zoned": false, 00:26:22.403 "supported_io_types": { 00:26:22.403 "read": true, 00:26:22.403 "write": true, 00:26:22.403 "unmap": true, 00:26:22.403 "flush": true, 00:26:22.403 "reset": true, 00:26:22.403 "nvme_admin": true, 00:26:22.403 "nvme_io": true, 00:26:22.403 "nvme_io_md": false, 00:26:22.403 "write_zeroes": true, 00:26:22.403 "zcopy": false, 00:26:22.403 "get_zone_info": false, 00:26:22.403 "zone_management": false, 00:26:22.403 "zone_append": false, 00:26:22.403 "compare": true, 00:26:22.403 "compare_and_write": false, 00:26:22.403 "abort": true, 00:26:22.403 "seek_hole": false, 00:26:22.403 "seek_data": false, 00:26:22.403 
"copy": true, 00:26:22.403 "nvme_iov_md": false 00:26:22.403 }, 00:26:22.403 "driver_specific": { 00:26:22.403 "nvme": [ 00:26:22.403 { 00:26:22.403 "pci_address": "0000:00:11.0", 00:26:22.403 "trid": { 00:26:22.403 "trtype": "PCIe", 00:26:22.403 "traddr": "0000:00:11.0" 00:26:22.403 }, 00:26:22.403 "ctrlr_data": { 00:26:22.403 "cntlid": 0, 00:26:22.403 "vendor_id": "0x1b36", 00:26:22.403 "model_number": "QEMU NVMe Ctrl", 00:26:22.403 "serial_number": "12341", 00:26:22.403 "firmware_revision": "8.0.0", 00:26:22.403 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:22.403 "oacs": { 00:26:22.403 "security": 0, 00:26:22.403 "format": 1, 00:26:22.403 "firmware": 0, 00:26:22.403 "ns_manage": 1 00:26:22.403 }, 00:26:22.403 "multi_ctrlr": false, 00:26:22.403 "ana_reporting": false 00:26:22.403 }, 00:26:22.403 "vs": { 00:26:22.403 "nvme_version": "1.4" 00:26:22.403 }, 00:26:22.403 "ns_data": { 00:26:22.403 "id": 1, 00:26:22.403 "can_share": false 00:26:22.403 } 00:26:22.403 } 00:26:22.403 ], 00:26:22.403 "mp_policy": "active_passive" 00:26:22.403 } 00:26:22.403 } 00:26:22.403 ]' 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:22.403 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:22.664 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=d6b030e9-b8fb-4cac-9c2b-a35c27036257 00:26:22.664 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:22.664 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6b030e9-b8fb-4cac-9c2b-a35c27036257 00:26:22.925 12:38:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=1d564964-6b7e-40bc-846f-494c07e063a6 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1d564964-6b7e-40bc-846f-494c07e063a6 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:23.225 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:23.538 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:23.538 { 00:26:23.538 "name": "a12ffa57-ea1f-4109-a2eb-98c4b5931ede", 00:26:23.538 "aliases": [ 00:26:23.538 "lvs/nvme0n1p0" 00:26:23.538 ], 00:26:23.538 "product_name": "Logical Volume", 00:26:23.538 "block_size": 4096, 00:26:23.538 "num_blocks": 26476544, 00:26:23.538 "uuid": "a12ffa57-ea1f-4109-a2eb-98c4b5931ede", 00:26:23.538 "assigned_rate_limits": { 00:26:23.538 "rw_ios_per_sec": 0, 00:26:23.538 "rw_mbytes_per_sec": 0, 00:26:23.538 "r_mbytes_per_sec": 0, 00:26:23.538 "w_mbytes_per_sec": 0 00:26:23.538 }, 00:26:23.538 "claimed": false, 00:26:23.538 "zoned": false, 00:26:23.538 "supported_io_types": { 00:26:23.538 "read": true, 00:26:23.538 "write": true, 00:26:23.538 "unmap": true, 00:26:23.538 "flush": false, 00:26:23.538 "reset": true, 00:26:23.538 "nvme_admin": false, 00:26:23.538 "nvme_io": false, 00:26:23.538 "nvme_io_md": false, 00:26:23.538 "write_zeroes": true, 00:26:23.538 "zcopy": false, 00:26:23.538 "get_zone_info": false, 00:26:23.538 "zone_management": false, 00:26:23.538 "zone_append": false, 00:26:23.538 "compare": false, 00:26:23.538 "compare_and_write": false, 00:26:23.538 "abort": false, 00:26:23.538 "seek_hole": true, 00:26:23.538 "seek_data": true, 00:26:23.538 "copy": false, 00:26:23.538 "nvme_iov_md": false 00:26:23.538 }, 00:26:23.538 "driver_specific": { 00:26:23.538 "lvol": { 00:26:23.538 "lvol_store_uuid": "1d564964-6b7e-40bc-846f-494c07e063a6", 00:26:23.538 "base_bdev": "nvme0n1", 00:26:23.538 "thin_provision": true, 00:26:23.538 "num_allocated_clusters": 0, 00:26:23.538 "snapshot": false, 00:26:23.538 "clone": false, 00:26:23.538 "esnap_clone": false 00:26:23.538 } 00:26:23.538 } 00:26:23.538 } 00:26:23.538 ]' 00:26:23.538 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:23.538 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:23.538 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:23.538 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:23.538 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:23.539 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:23.539 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:23.539 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:23.539 12:38:30 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:23.800 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:23.800 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:23.800 12:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:23.800 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:23.800 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:23.800 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:23.800 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:23.800 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:24.059 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:24.059 { 00:26:24.059 "name": "a12ffa57-ea1f-4109-a2eb-98c4b5931ede", 00:26:24.059 "aliases": [ 00:26:24.059 "lvs/nvme0n1p0" 00:26:24.059 ], 00:26:24.059 "product_name": "Logical Volume", 00:26:24.059 "block_size": 4096, 00:26:24.059 "num_blocks": 26476544, 00:26:24.059 "uuid": "a12ffa57-ea1f-4109-a2eb-98c4b5931ede", 00:26:24.059 "assigned_rate_limits": { 00:26:24.059 "rw_ios_per_sec": 0, 00:26:24.059 "rw_mbytes_per_sec": 0, 00:26:24.059 "r_mbytes_per_sec": 0, 00:26:24.059 "w_mbytes_per_sec": 0 00:26:24.059 }, 00:26:24.059 "claimed": false, 00:26:24.059 "zoned": false, 00:26:24.059 "supported_io_types": { 00:26:24.059 "read": true, 00:26:24.059 "write": true, 00:26:24.059 "unmap": true, 00:26:24.059 "flush": false, 00:26:24.059 "reset": true, 00:26:24.059 "nvme_admin": false, 00:26:24.059 "nvme_io": false, 00:26:24.059 "nvme_io_md": false, 00:26:24.059 "write_zeroes": true, 00:26:24.059 "zcopy": false, 00:26:24.059 "get_zone_info": false, 00:26:24.059 "zone_management": false, 00:26:24.059 "zone_append": false, 00:26:24.059 "compare": false, 00:26:24.059 "compare_and_write": false, 00:26:24.059 "abort": false, 00:26:24.059 "seek_hole": true, 00:26:24.059 "seek_data": true, 00:26:24.059 "copy": false, 00:26:24.059 "nvme_iov_md": false 00:26:24.059 }, 00:26:24.059 "driver_specific": { 00:26:24.059 "lvol": { 00:26:24.059 "lvol_store_uuid": "1d564964-6b7e-40bc-846f-494c07e063a6", 00:26:24.059 "base_bdev": "nvme0n1", 00:26:24.059 "thin_provision": true, 00:26:24.059 "num_allocated_clusters": 0, 00:26:24.059 "snapshot": false, 00:26:24.059 "clone": false, 00:26:24.059 "esnap_clone": false 00:26:24.059 } 00:26:24.059 } 00:26:24.059 } 00:26:24.059 ]' 00:26:24.060 12:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:24.060 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:24.060 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:24.060 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:24.060 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:24.060 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:24.060 12:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:24.060 12:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
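create_nv_cache_bdev derives the write-buffer size from the base volume: the base_size=5171 and cache_size=5171 locals in the trace equal 5 % of the 103424 MiB lvol, though that percentage is inferred from the numbers here and the exact formula lives in ftl/common.sh:

  echo $(( 103424 * 5 / 100 ))   # 5171 MiB, the slice carved out of nvc0n1

That figure feeds straight into the `bdev_split_create nvc0n1 -s 5171 1` call just below, yielding the single nvc0n1p0 partition that becomes the FTL's NV cache.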
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:24.320 12:38:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:24.320 12:38:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:24.320 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:24.320 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:24.320 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:24.320 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:24.320 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a12ffa57-ea1f-4109-a2eb-98c4b5931ede 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:24.582 { 00:26:24.582 "name": "a12ffa57-ea1f-4109-a2eb-98c4b5931ede", 00:26:24.582 "aliases": [ 00:26:24.582 "lvs/nvme0n1p0" 00:26:24.582 ], 00:26:24.582 "product_name": "Logical Volume", 00:26:24.582 "block_size": 4096, 00:26:24.582 "num_blocks": 26476544, 00:26:24.582 "uuid": "a12ffa57-ea1f-4109-a2eb-98c4b5931ede", 00:26:24.582 "assigned_rate_limits": { 00:26:24.582 "rw_ios_per_sec": 0, 00:26:24.582 "rw_mbytes_per_sec": 0, 00:26:24.582 "r_mbytes_per_sec": 0, 00:26:24.582 "w_mbytes_per_sec": 0 00:26:24.582 }, 00:26:24.582 "claimed": false, 00:26:24.582 "zoned": false, 00:26:24.582 "supported_io_types": { 00:26:24.582 "read": true, 00:26:24.582 "write": true, 00:26:24.582 "unmap": true, 00:26:24.582 "flush": false, 00:26:24.582 "reset": true, 00:26:24.582 "nvme_admin": false, 00:26:24.582 "nvme_io": false, 00:26:24.582 "nvme_io_md": false, 00:26:24.582 "write_zeroes": true, 00:26:24.582 "zcopy": false, 00:26:24.582 "get_zone_info": false, 00:26:24.582 "zone_management": false, 00:26:24.582 "zone_append": false, 00:26:24.582 "compare": false, 00:26:24.582 "compare_and_write": false, 00:26:24.582 "abort": false, 00:26:24.582 "seek_hole": true, 00:26:24.582 "seek_data": true, 00:26:24.582 "copy": false, 00:26:24.582 "nvme_iov_md": false 00:26:24.582 }, 00:26:24.582 "driver_specific": { 00:26:24.582 "lvol": { 00:26:24.582 "lvol_store_uuid": "1d564964-6b7e-40bc-846f-494c07e063a6", 00:26:24.582 "base_bdev": "nvme0n1", 00:26:24.582 "thin_provision": true, 00:26:24.582 "num_allocated_clusters": 0, 00:26:24.582 "snapshot": false, 00:26:24.582 "clone": false, 00:26:24.582 "esnap_clone": false 00:26:24.582 } 00:26:24.582 } 00:26:24.582 } 00:26:24.582 ]' 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a12ffa57-ea1f-4109-a2eb-98c4b5931ede 
--l2p_dram_limit 10' 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:24.582 12:38:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a12ffa57-ea1f-4109-a2eb-98c4b5931ede --l2p_dram_limit 10 -c nvc0n1p0 00:26:24.844 [2024-12-16 12:38:31.708705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.844 [2024-12-16 12:38:31.708750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:24.844 [2024-12-16 12:38:31.708766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:24.844 [2024-12-16 12:38:31.708773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.844 [2024-12-16 12:38:31.708813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.844 [2024-12-16 12:38:31.708821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:24.844 [2024-12-16 12:38:31.708829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:24.844 [2024-12-16 12:38:31.708834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.844 [2024-12-16 12:38:31.708854] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:24.844 [2024-12-16 12:38:31.709420] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:24.844 [2024-12-16 12:38:31.709439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.844 [2024-12-16 12:38:31.709446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:24.844 [2024-12-16 12:38:31.709455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:26:24.844 [2024-12-16 12:38:31.709461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.844 [2024-12-16 12:38:31.709486] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 37b5d6ac-e9c9-4bd5-8ad5-f8a6eec59538 00:26:24.844 [2024-12-16 12:38:31.710767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.844 [2024-12-16 12:38:31.710955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:24.844 [2024-12-16 12:38:31.710972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:24.844 [2024-12-16 12:38:31.710981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.844 [2024-12-16 12:38:31.717971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.844 [2024-12-16 12:38:31.718076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:24.844 [2024-12-16 12:38:31.718089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.921 ms 00:26:24.844 [2024-12-16 12:38:31.718097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.844 [2024-12-16 12:38:31.718179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.844 [2024-12-16 12:38:31.718188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:24.844 [2024-12-16 12:38:31.718195] 
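Pulling the provisioning steps together, the stack under test was built from six RPCs, all verbatim from this log apart from shortened paths and UUIDs:

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # 5 GiB base NVMe
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs uuid>             # thin 101 GiB lvol
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV-cache NVMe
  rpc.py bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB cache slice
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol uuid> --l2p_dram_limit 10 -c nvc0n1p0

The --l2p_dram_limit 10 argument caps the resident L2P table at 10 MiB of DRAM, which is why the startup trace later reports "l2p maximum resident size is: 9 (of 10) MiB".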
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:24.844 [2024-12-16 12:38:31.718205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.844 [2024-12-16 12:38:31.718246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.844 [2024-12-16 12:38:31.718256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:24.844 [2024-12-16 12:38:31.718262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:24.845 [2024-12-16 12:38:31.718272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.845 [2024-12-16 12:38:31.718289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:24.845 [2024-12-16 12:38:31.721555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.845 [2024-12-16 12:38:31.721579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:24.845 [2024-12-16 12:38:31.721588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.267 ms 00:26:24.845 [2024-12-16 12:38:31.721594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.845 [2024-12-16 12:38:31.721623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.845 [2024-12-16 12:38:31.721629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:24.845 [2024-12-16 12:38:31.721637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:24.845 [2024-12-16 12:38:31.721643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.845 [2024-12-16 12:38:31.721657] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:24.845 [2024-12-16 12:38:31.721772] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:24.845 [2024-12-16 12:38:31.721785] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:24.845 [2024-12-16 12:38:31.721794] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:24.845 [2024-12-16 12:38:31.721804] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:24.845 [2024-12-16 12:38:31.721811] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:24.845 [2024-12-16 12:38:31.721819] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:24.845 [2024-12-16 12:38:31.721825] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:24.845 [2024-12-16 12:38:31.721836] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:24.845 [2024-12-16 12:38:31.721842] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:24.845 [2024-12-16 12:38:31.721850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.845 [2024-12-16 12:38:31.721861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:24.845 [2024-12-16 12:38:31.721869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:26:24.845 [2024-12-16 12:38:31.721874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.845 [2024-12-16 12:38:31.721940] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.845 [2024-12-16 12:38:31.721948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:24.845 [2024-12-16 12:38:31.721955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:24.845 [2024-12-16 12:38:31.721960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.845 [2024-12-16 12:38:31.722036] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:24.845 [2024-12-16 12:38:31.722044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:24.845 [2024-12-16 12:38:31.722052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:24.845 [2024-12-16 12:38:31.722061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:24.845 [2024-12-16 12:38:31.722073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:24.845 [2024-12-16 12:38:31.722087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:24.845 [2024-12-16 12:38:31.722095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:24.845 [2024-12-16 12:38:31.722107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:24.845 [2024-12-16 12:38:31.722115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:24.845 [2024-12-16 12:38:31.722122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:24.845 [2024-12-16 12:38:31.722128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:24.845 [2024-12-16 12:38:31.722135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:24.845 [2024-12-16 12:38:31.722140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:24.845 [2024-12-16 12:38:31.722163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:24.845 [2024-12-16 12:38:31.722170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:24.845 [2024-12-16 12:38:31.722182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.845 [2024-12-16 12:38:31.722195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:24.845 [2024-12-16 12:38:31.722200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.845 [2024-12-16 12:38:31.722211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:24.845 [2024-12-16 12:38:31.722218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.845 [2024-12-16 12:38:31.722229] 
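The layout figures are internally consistent: 20971520 L2P entries at the stated 4-byte address size is exactly the 80.00 MiB l2p region in the dump, and at the 4096 B block size those entries address an 80 GiB logical space:

  echo $(( 20971520 * 4 / 1024 / 1024 ))             # 80 (MiB): the l2p region
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))   # 80 (GiB): logical blocks mapped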
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:24.845 [2024-12-16 12:38:31.722234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.845 [2024-12-16 12:38:31.722246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:24.845 [2024-12-16 12:38:31.722254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:24.845 [2024-12-16 12:38:31.722267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:24.845 [2024-12-16 12:38:31.722272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:24.845 [2024-12-16 12:38:31.722279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:24.845 [2024-12-16 12:38:31.722284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:24.845 [2024-12-16 12:38:31.722291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:24.845 [2024-12-16 12:38:31.722296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:24.845 [2024-12-16 12:38:31.722307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:24.845 [2024-12-16 12:38:31.722314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722320] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:24.845 [2024-12-16 12:38:31.722328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:24.845 [2024-12-16 12:38:31.722334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:24.845 [2024-12-16 12:38:31.722341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.845 [2024-12-16 12:38:31.722348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:24.845 [2024-12-16 12:38:31.722356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:24.845 [2024-12-16 12:38:31.722361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:24.845 [2024-12-16 12:38:31.722368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:24.845 [2024-12-16 12:38:31.722373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:24.845 [2024-12-16 12:38:31.722379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:24.845 [2024-12-16 12:38:31.722386] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:24.845 [2024-12-16 12:38:31.722394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.845 [2024-12-16 12:38:31.722403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:24.845 [2024-12-16 12:38:31.722410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:24.845 [2024-12-16 12:38:31.722415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:24.845 [2024-12-16 12:38:31.722422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:24.845 [2024-12-16 12:38:31.722427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:24.845 [2024-12-16 12:38:31.722436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:24.845 [2024-12-16 12:38:31.722441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:24.845 [2024-12-16 12:38:31.722448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:24.845 [2024-12-16 12:38:31.722453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:24.845 [2024-12-16 12:38:31.722462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:24.845 [2024-12-16 12:38:31.722467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:24.845 [2024-12-16 12:38:31.722474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:24.845 [2024-12-16 12:38:31.722479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:24.845 [2024-12-16 12:38:31.722487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:24.845 [2024-12-16 12:38:31.722493] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:24.845 [2024-12-16 12:38:31.722502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.845 [2024-12-16 12:38:31.722508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:24.845 [2024-12-16 12:38:31.722515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:24.846 [2024-12-16 12:38:31.722520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:24.846 [2024-12-16 12:38:31.722528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:24.846 [2024-12-16 12:38:31.722535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.846 [2024-12-16 12:38:31.722543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:24.846 [2024-12-16 12:38:31.722549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:26:24.846 [2024-12-16 12:38:31.722556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.846 [2024-12-16 12:38:31.722599] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:24.846 [2024-12-16 12:38:31.722611] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:28.145 [2024-12-16 12:38:34.829202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:34.829340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:28.145 [2024-12-16 12:38:34.829368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3106.593 ms 00:26:28.145 [2024-12-16 12:38:34.829377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:34.852897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:34.853020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:28.145 [2024-12-16 12:38:34.853033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.351 ms 00:26:28.145 [2024-12-16 12:38:34.853041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:34.853141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:34.853151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:28.145 [2024-12-16 12:38:34.853176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:28.145 [2024-12-16 12:38:34.853189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:34.879805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:34.879915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:28.145 [2024-12-16 12:38:34.879928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.577 ms 00:26:28.145 [2024-12-16 12:38:34.879936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:34.879960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:34.879972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:28.145 [2024-12-16 12:38:34.879979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:28.145 [2024-12-16 12:38:34.879992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:34.880418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:34.880435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:28.145 [2024-12-16 12:38:34.880443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:26:28.145 [2024-12-16 12:38:34.880451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:34.880533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:34.880542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:28.145 [2024-12-16 12:38:34.880551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:28.145 [2024-12-16 12:38:34.880560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:34.893732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:34.893761] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:28.145 [2024-12-16 12:38:34.893769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.157 ms 00:26:28.145 [2024-12-16 12:38:34.893776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:34.917992] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:28.145 [2024-12-16 12:38:34.921579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:34.921613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:28.145 [2024-12-16 12:38:34.921628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.743 ms 00:26:28.145 [2024-12-16 12:38:34.921638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:34.998396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:34.998426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:28.145 [2024-12-16 12:38:34.998437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.720 ms 00:26:28.145 [2024-12-16 12:38:34.998443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:34.998590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:34.998600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:28.145 [2024-12-16 12:38:34.998611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:26:28.145 [2024-12-16 12:38:34.998618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:35.016880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:35.016996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:28.145 [2024-12-16 12:38:35.017013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.227 ms 00:26:28.145 [2024-12-16 12:38:35.017019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:35.035191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:35.035385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:28.145 [2024-12-16 12:38:35.035401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.141 ms 00:26:28.145 [2024-12-16 12:38:35.035407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:35.035852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:35.035861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:28.145 [2024-12-16 12:38:35.035869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:26:28.145 [2024-12-16 12:38:35.035876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:35.098804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:35.098916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:28.145 [2024-12-16 12:38:35.098936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.900 ms 00:26:28.145 [2024-12-16 12:38:35.098943] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:35.118964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:35.118991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:28.145 [2024-12-16 12:38:35.119002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.967 ms 00:26:28.145 [2024-12-16 12:38:35.119008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:35.137365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:35.137462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:28.145 [2024-12-16 12:38:35.137477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.326 ms 00:26:28.145 [2024-12-16 12:38:35.137483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:35.156559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:35.156655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:28.145 [2024-12-16 12:38:35.156670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.048 ms 00:26:28.145 [2024-12-16 12:38:35.156676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:35.156707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:35.156714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:28.145 [2024-12-16 12:38:35.156725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:28.145 [2024-12-16 12:38:35.156731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:35.156806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.145 [2024-12-16 12:38:35.156816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:28.145 [2024-12-16 12:38:35.156824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:28.145 [2024-12-16 12:38:35.156830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.145 [2024-12-16 12:38:35.157631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3448.556 ms, result 0 00:26:28.145 { 00:26:28.145 "name": "ftl0", 00:26:28.145 "uuid": "37b5d6ac-e9c9-4bd5-8ad5-f8a6eec59538" 00:26:28.145 } 00:26:28.145 12:38:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:28.145 12:38:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:28.402 12:38:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:28.402 12:38:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:28.402 12:38:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:28.660 /dev/nbd0 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- 
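With startup finished, the test exposes ftl0 through the kernel's nbd driver so ordinary tools such as dd and md5sum can drive it; waitfornbd then polls /proc/partitions and reads a single block back as a liveness check, producing the `1+0 records in/out` lines just below (commands from the log, paths shortened):

  modprobe nbd
  rpc.py nbd_start_disk ftl0 /dev/nbd0
  grep -q -w nbd0 /proc/partitions                           # retried up to 20 times
  dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct    # single-block smoke read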
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:28.660 1+0 records in 00:26:28.660 1+0 records out 00:26:28.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242936 s, 16.9 MB/s 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:28.660 12:38:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:28.661 12:38:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:28.661 [2024-12-16 12:38:35.637807] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:26:28.661 [2024-12-16 12:38:35.638363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82782 ] 00:26:28.921 [2024-12-16 12:38:35.794175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.921 [2024-12-16 12:38:35.890632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.304  [2024-12-16T12:38:38.350Z] Copying: 195/1024 [MB] (195 MBps) [2024-12-16T12:38:39.285Z] Copying: 392/1024 [MB] (196 MBps) [2024-12-16T12:38:40.219Z] Copying: 629/1024 [MB] (236 MBps) [2024-12-16T12:38:40.785Z] Copying: 883/1024 [MB] (254 MBps) [2024-12-16T12:38:41.352Z] Copying: 1024/1024 [MB] (average 224 MBps) 00:26:34.246 00:26:34.247 12:38:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:36.790 12:38:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:36.791 [2024-12-16 12:38:43.400709] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
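At this point the test has staged 1 GiB of random data, fingerprinted it with md5sum, and is pushing it through the NBD export onto the FTL bdev. A minimal sketch of the same fill-and-fingerprint pattern, assuming /dev/nbd0 is already backed by ftl0 and using plain dd in place of spdk_dd (the path and sizes are illustrative):

#!/usr/bin/env bash
# Sketch: stage random data, record its checksum, write it to the export.
set -euo pipefail
testfile=/tmp/testfile      # illustrative path
blocks=262144               # 262144 x 4 KiB = 1 GiB, as in this run

# Stage 1 GiB of random data and fingerprint it for later comparison.
dd if=/dev/urandom of="$testfile" bs=4096 count="$blocks"
md5sum "$testfile" | tee "$testfile.md5"

# Write the staged data to the FTL bdev through the NBD block device.
# O_DIRECT keeps the page cache out of the data path, as the test does.
dd if="$testfile" of=/dev/nbd0 bs=4096 count="$blocks" oflag=direct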
[2024-12-16 12:38:43.400709] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
[2024-12-16 12:38:43.400798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82863 ]
[2024-12-16 12:38:43.546977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 12:38:43.640748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-16T12:38:46.221Z] Copying: 19/1024 [MB] (19 MBps)
[... 32 further progress updates, 11 to 36 MBps ...]
[2024-12-16T12:39:17.612Z] Copying: 1009/1024 [MB] (28 MBps)
[2024-12-16T12:39:18.180Z] Copying: 1024/1024 [MB] (average 30 MBps)
12:39:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
12:39:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
12:39:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
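The teardown order above is the part that matters for a clean shutdown: flush the block device, remove the NBD export, then unload the FTL bdev so its management path can run the 'Persist ...' steps seen next. A condensed sketch of that sequence, assuming a running SPDK target with ftl0 exported on /dev/nbd0 (the rpc.py path is the one from this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

sync /dev/nbd0                  # flush outstanding writes to the export
"$rpc" nbd_stop_disk /dev/nbd0  # detach the NBD endpoint first
"$rpc" bdev_ftl_unload -b ftl0  # graceful unload: triggers the 'FTL shutdown'
                                # management process logged below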
[2024-12-16 12:39:18.293093] [FTL][ftl0] Deinit core IO channel: duration 0.003 ms, status 0
[2024-12-16 12:39:18.293196] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-16 12:39:18.295450] [FTL][ftl0] Unregister IO device: duration 2.238 ms, status 0
[2024-12-16 12:39:18.297691] [FTL][ftl0] Stop core poller: duration 2.176 ms, status 0
[2024-12-16 12:39:18.313747] [FTL][ftl0] Persist L2P: duration 15.996 ms, status 0
[2024-12-16 12:39:18.318431] [FTL][ftl0] Finish L2P trims: duration 4.609 ms, status 0
[2024-12-16 12:39:18.338008] [FTL][ftl0] Persist NV cache metadata: duration 19.483 ms, status 0
[2024-12-16 12:39:18.351260] [FTL][ftl0] Persist valid map metadata: duration 13.053 ms, status 0
[2024-12-16 12:39:18.351479] [FTL][ftl0] Persist P2L metadata: duration 0.092 ms, status 0
[2024-12-16 12:39:18.369873] [FTL][ftl0] Persist band info metadata: duration 18.355 ms, status 0
[2024-12-16 12:39:18.387652] [FTL][ftl0] Persist trim metadata: duration 17.710 ms, status 0
[2024-12-16 12:39:18.405048] [FTL][ftl0] Persist superblock: duration 17.327 ms, status 0
[2024-12-16 12:39:18.422697] [FTL][ftl0] Set FTL clean state: duration 17.465 ms, status 0
[2024-12-16 12:39:18.422763] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-12-16 12:39:18.422776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 100 (all identical): 0 / 261120 wr_cnt: 0 state: free
[2024-12-16 12:39:18.423514] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-12-16 12:39:18.423522] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37b5d6ac-e9c9-4bd5-8ad5-f8a6eec59538
[2024-12-16 12:39:18.423528] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-12-16 12:39:18.423537] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-12-16 12:39:18.423543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-12-16 12:39:18.423551] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-12-16 12:39:18.423557] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-12-16 12:39:18.423587] [FTL][ftl0] Dump statistics: duration 0.825 ms, status 0
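The band and stats dump is the noisiest part of the console output; when scanning many runs it helps to reduce it to the few counters that matter (device UUID, valid LBAs, total and user writes, WAF). A small sketch that pulls those out of a saved console log (console.log is an illustrative filename):

# Extract the per-device summary counters from a captured console log.
grep -E 'ftl_dev_dump_stats.*\[FTL\]\[ftl0\]' console.log \
  | grep -E 'device UUID|total valid LBAs|total writes|user writes|WAF' \
  | sed 's/.*\[FTL\]\[ftl0\] //'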
[2024-12-16 12:39:18.433902] [FTL][ftl0] Deinitialize L2P: duration 10.271 ms, status 0
[2024-12-16 12:39:18.434258] [FTL][ftl0] Deinitialize P2L checkpointing: duration 0.294 ms, status 0
[2024-12-16 12:39:18.469002] [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
[2024-12-16 12:39:18.469096] [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
[2024-12-16 12:39:18.469193] [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
[2024-12-16 12:39:18.469236] [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
[2024-12-16 12:39:18.531805] [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
[2024-12-16 12:39:18.582384] [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
[2024-12-16 12:39:18.582546] [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
[2024-12-16 12:39:18.582612] [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
[2024-12-16 12:39:18.582711] [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
[2024-12-16 12:39:18.582763] [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
[2024-12-16 12:39:18.582818] [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
[2024-12-16 12:39:18.582886] [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
[2024-12-16 12:39:18.583023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 289.892 ms, result 0
true
12:39:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82645
12:39:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82645
12:39:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
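This is the step the test is named for: the SPDK target that owned ftl0 (PID 82645 in this run) is taken down with SIGKILL rather than an RPC shutdown, so no graceful 'FTL shutdown' management process can run for whatever state the process still holds, and the test then reopens the device and keeps writing. A sketch of the same fault injection, assuming the target PID was captured at launch (svcpid is an illustrative variable name; SPDK_BIN_DIR and -m 0x1 come from this log):

# Launch the target and remember its PID.
"$SPDK_BIN_DIR/spdk_tgt" -m 0x1 &
svcpid=$!

# ... create ftl0 and write data to it ...

# Simulate power loss: SIGKILL skips every graceful shutdown path.
kill -9 "$svcpid"
rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"   # drop the stale trace file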
[2024-12-16 12:39:18.673690] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
[2024-12-16 12:39:18.673807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83235 ]
[2024-12-16 12:39:18.831090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 12:39:18.927151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-16T12:39:21.295Z] Copying: 250/1024 [MB] (250 MBps)
[2024-12-16T12:39:22.238Z] Copying: 503/1024 [MB] (252 MBps)
[2024-12-16T12:39:23.223Z] Copying: 753/1024 [MB] (250 MBps)
[2024-12-16T12:39:23.484Z] Copying: 996/1024 [MB] (243 MBps)
[2024-12-16T12:39:24.055Z] Copying: 1024/1024 [MB] (average 248 MBps)
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82645 Killed                  "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
12:39:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-16 12:39:23.917457] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
[2024-12-16 12:39:23.917576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83293 ]
[2024-12-16 12:39:24.072311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 12:39:24.164658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-16 12:39:24.397859] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-16 12:39:24.397919] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-16 12:39:24.461293] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
[2024-12-16 12:39:24.461671] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
[2024-12-16 12:39:24.462226] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
[2024-12-16 12:39:24.841776] [FTL][ftl0] Check configuration: duration 0.006 ms, status 0
[2024-12-16 12:39:24.841925] [FTL][ftl0] Open base bdev: duration 0.041 ms, status 0
[2024-12-16 12:39:24.841976] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-12-16 12:39:24.842974] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-12-16 12:39:24.843029] [FTL][ftl0] Open cache bdev: duration 1.059 ms, status 0
[2024-12-16 12:39:24.845663] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-12-16 12:39:24.861181] [FTL][ftl0] Load super block: duration 15.520 ms, status 0
[2024-12-16 12:39:24.861765] [FTL][ftl0] Validate super block: duration 0.041 ms, status 0
[2024-12-16 12:39:24.873304] [FTL][ftl0] Initialize memory pools: duration 11.294 ms, status 0
[2024-12-16 12:39:24.873697] [FTL][ftl0] Initialize bands: duration 0.079 ms, status 0
[2024-12-16 12:39:24.873961] [FTL][ftl0] Register IO device: duration 0.009 ms, status 0
[2024-12-16 12:39:24.874016] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-12-16 12:39:24.878754] [FTL][ftl0] Initialize core IO channel: duration 4.746 ms, status 0
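Note how the post-crash write is issued: spdk_dd is a standalone app, so instead of talking to a target it rebuilds the bdev stack from the JSON config the script saved earlier with save_subsystem_config, addresses ftl0 directly with --ob, and uses --seek to skip the 262144 blocks written before the crash. A sketch of that invocation pattern (paths are the ones from this run):

dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # saved bdev config

# --ob targets an SPDK output bdev (no kernel block device involved);
# --seek offsets the write past the region written before the crash.
"$dd_bin" --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
          --ob=ftl0 --count=262144 --seek=262144 --json="$cfg"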
[2024-12-16 12:39:24.878856] [FTL][ftl0] Decorate bands: duration 0.015 ms, status 0
[2024-12-16 12:39:24.878926] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-12-16 12:39:24.878956] upgrade/ftl_sb_v5.c: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
[2024-12-16 12:39:24.879136] upgrade/ftl_sb_v5.c: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
[2024-12-16 12:39:24.879194] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-12-16 12:39:24.879205] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-12-16 12:39:24.879215] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-12-16 12:39:24.879223] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-12-16 12:39:24.879231] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-12-16 12:39:24.879240] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-12-16 12:39:24.879248] [FTL][ftl0] Initialize layout: duration 0.327 ms, status 0
[2024-12-16 12:39:24.879361] [FTL][ftl0] Verify layout: duration 0.071 ms, status 0
[2024-12-16 12:39:24.879494] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
  Region sb:              offset   0.00 MiB, blocks  0.12 MiB
  Region l2p:             offset   0.12 MiB, blocks 80.00 MiB
  Region band_md:         offset  80.12 MiB, blocks  0.50 MiB
  Region band_md_mirror:  offset  80.62 MiB, blocks  0.50 MiB
  Region nvc_md:          offset 113.88 MiB, blocks  0.12 MiB
  Region nvc_md_mirror:   offset 114.00 MiB, blocks  0.12 MiB
  Region p2l0:            offset  81.12 MiB, blocks  8.00 MiB
  Region p2l1:            offset  89.12 MiB, blocks  8.00 MiB
  Region p2l2:            offset  97.12 MiB, blocks  8.00 MiB
  Region p2l3:            offset 105.12 MiB, blocks  8.00 MiB
  Region trim_md:         offset 113.12 MiB, blocks  0.25 MiB
  Region trim_md_mirror:  offset 113.38 MiB, blocks  0.25 MiB
  Region trim_log:        offset 113.62 MiB, blocks  0.12 MiB
  Region trim_log_mirror: offset 113.75 MiB, blocks  0.12 MiB
[2024-12-16 12:39:24.879827] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  Region sb_mirror: offset      0.00 MiB, blocks      0.12 MiB
  Region vmap:      offset 102400.25 MiB, blocks      3.38 MiB
  Region data_btm:  offset      0.25 MiB, blocks 102400.00 MiB
[2024-12-16 12:39:24.879915] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
  Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
  Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
  Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
  Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
  Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-12-16 12:39:24.880040] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-12-16 12:39:24.880091] [FTL][ftl0] Layout upgrade: duration 0.667 ms, status 0
[2024-12-16 12:39:24.918818] [FTL][ftl0] Initialize metadata: duration 38.386 ms, status 0
[2024-12-16 12:39:24.919225] [FTL][ftl0] Initialize band addresses: duration 0.088 ms, status 0
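The superblock dump above expresses each region in FTL blocks (hex), while the layout dump reports MiB; the two can be cross-checked directly. For instance the largest base-dev region, type:0x9 with blk_sz:0x1900000, converts to exactly the 102400.00 MiB reported for Region data_btm, which also confirms a 4096-byte FTL block. A shell one-liner for the conversion:

# 0x1900000 blocks * 4096 B / 2^20 = 102400 MiB, matching Region data_btm.
echo $(( 0x1900000 * 4096 / 1024 / 1024 )) MiB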
[2024-12-16 12:39:24.970375] [FTL][ftl0] Initialize NV cache: duration 50.889 ms, status 0
[2024-12-16 12:39:24.970832] [FTL][ftl0] Initialize valid map: duration 0.005 ms, status 0
[2024-12-16 12:39:24.971694] [FTL][ftl0] Initialize trim map: duration 0.692 ms, status 0
[2024-12-16 12:39:24.972126] [FTL][ftl0] Initialize bands metadata: duration 0.140 ms, status 0
[2024-12-16 12:39:24.990764] [FTL][ftl0] Initialize reloc: duration 18.257 ms, status 0
[2024-12-16 12:39:25.006237] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-12-16 12:39:25.006286] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-12-16 12:39:25.006302] [FTL][ftl0] Restore NV cache metadata: duration 15.236 ms, status 0
[2024-12-16 12:39:25.032816] [FTL][ftl0] Restore valid map metadata: duration 26.429 ms, status 0
[2024-12-16 12:39:25.045665] [FTL][ftl0] Restore band info metadata: duration 12.723 ms, status 0
[2024-12-16 12:39:25.058564] [FTL][ftl0] Restore trim metadata: duration 12.783 ms, status 0
[2024-12-16 12:39:25.059315] [FTL][ftl0] Initialize P2L checkpointing: duration 0.573 ms, status 0
[2024-12-16 12:39:25.132982] [FTL][ftl0] Restore P2L checkpoints: duration 73.598 ms, status 0
[2024-12-16 12:39:25.145381] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-12-16 12:39:25.149388] [FTL][ftl0] Initialize L2P: duration 15.870 ms, status 0
[2024-12-16 12:39:25.149560] [FTL][ftl0] Restore L2P: duration 0.017 ms, status 0
[2024-12-16 12:39:25.149675] [FTL][ftl0] Finalize band initialization: duration 0.042 ms, status 0
[2024-12-16 12:39:25.149731] [FTL][ftl0] Start core poller: duration 0.005 ms, status 0
poller 00:27:18.311 [2024-12-16 12:39:25.149752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:18.311 [2024-12-16 12:39:25.149762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.311 [2024-12-16 12:39:25.149807] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:18.311 [2024-12-16 12:39:25.149818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.311 [2024-12-16 12:39:25.149828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:18.311 [2024-12-16 12:39:25.149839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:18.311 [2024-12-16 12:39:25.149853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.311 [2024-12-16 12:39:25.176270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.311 [2024-12-16 12:39:25.176472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:18.311 [2024-12-16 12:39:25.176495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.398 ms 00:27:18.311 [2024-12-16 12:39:25.176504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.311 [2024-12-16 12:39:25.176585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.311 [2024-12-16 12:39:25.176595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:18.311 [2024-12-16 12:39:25.176604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:18.311 [2024-12-16 12:39:25.176613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.311 [2024-12-16 12:39:25.178672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.046 ms, result 0 00:27:19.256  [2024-12-16T12:39:27.307Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-16T12:39:28.248Z] Copying: 32/1024 [MB] (18 MBps) [2024-12-16T12:39:29.630Z] Copying: 56/1024 [MB] (23 MBps) [2024-12-16T12:39:30.203Z] Copying: 79/1024 [MB] (23 MBps) [2024-12-16T12:39:31.590Z] Copying: 99/1024 [MB] (20 MBps) [2024-12-16T12:39:32.534Z] Copying: 120/1024 [MB] (20 MBps) [2024-12-16T12:39:33.479Z] Copying: 137/1024 [MB] (16 MBps) [2024-12-16T12:39:34.422Z] Copying: 147/1024 [MB] (10 MBps) [2024-12-16T12:39:35.363Z] Copying: 159/1024 [MB] (11 MBps) [2024-12-16T12:39:36.305Z] Copying: 170/1024 [MB] (11 MBps) [2024-12-16T12:39:37.250Z] Copying: 180/1024 [MB] (10 MBps) [2024-12-16T12:39:38.194Z] Copying: 191/1024 [MB] (10 MBps) [2024-12-16T12:39:39.581Z] Copying: 202/1024 [MB] (10 MBps) [2024-12-16T12:39:40.525Z] Copying: 213/1024 [MB] (11 MBps) [2024-12-16T12:39:41.468Z] Copying: 224/1024 [MB] (11 MBps) [2024-12-16T12:39:42.412Z] Copying: 235/1024 [MB] (11 MBps) [2024-12-16T12:39:43.356Z] Copying: 247/1024 [MB] (11 MBps) [2024-12-16T12:39:44.298Z] Copying: 258/1024 [MB] (11 MBps) [2024-12-16T12:39:45.240Z] Copying: 269/1024 [MB] (10 MBps) [2024-12-16T12:39:46.627Z] Copying: 279/1024 [MB] (10 MBps) [2024-12-16T12:39:47.200Z] Copying: 290/1024 [MB] (11 MBps) [2024-12-16T12:39:48.638Z] Copying: 302/1024 [MB] (11 MBps) [2024-12-16T12:39:49.217Z] Copying: 313/1024 [MB] (11 MBps) [2024-12-16T12:39:50.605Z] Copying: 324/1024 [MB] (11 MBps) [2024-12-16T12:39:51.550Z] Copying: 335/1024 [MB] (11 MBps) [2024-12-16T12:39:52.494Z] Copying: 346/1024 [MB] (11 MBps) [2024-12-16T12:39:53.440Z] Copying: 357/1024 [MB] (11 MBps) [2024-12-16T12:39:54.385Z] 
Copying: 368/1024 [MB] (11 MBps) […progress ticks 379/1024 → 1007/1024 [MB] at 10-13 MBps, 2024-12-16T12:39:55Z → 12:40:52Z, condensed…] Copying: 1018/1024 [MB] (11 MBps) [2024-12-16T12:40:52.752Z] Copying: 
1048180/1048576 [kB] (5260 kBps) [2024-12-16T12:40:52.752Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-16 12:40:52.598811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.646 [2024-12-16 12:40:52.598907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:45.646 [2024-12-16 12:40:52.598928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:45.646 [2024-12-16 12:40:52.598939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.646 [2024-12-16 12:40:52.603501] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:45.646 [2024-12-16 12:40:52.608239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.646 [2024-12-16 12:40:52.608304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:45.646 [2024-12-16 12:40:52.608320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.684 ms 00:28:45.646 [2024-12-16 12:40:52.608339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.646 [2024-12-16 12:40:52.620837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.646 [2024-12-16 12:40:52.621057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:45.646 [2024-12-16 12:40:52.621082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.130 ms 00:28:45.646 [2024-12-16 12:40:52.621094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.646 [2024-12-16 12:40:52.646033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.646 [2024-12-16 12:40:52.646097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:45.646 [2024-12-16 12:40:52.646111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.914 ms 00:28:45.646 [2024-12-16 12:40:52.646122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.646 [2024-12-16 12:40:52.652335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.646 [2024-12-16 12:40:52.652378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:45.646 [2024-12-16 12:40:52.652390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.140 ms 00:28:45.646 [2024-12-16 12:40:52.652399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.646 [2024-12-16 12:40:52.679493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.646 [2024-12-16 12:40:52.679707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:45.646 [2024-12-16 12:40:52.679728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.042 ms 00:28:45.646 [2024-12-16 12:40:52.679737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.646 [2024-12-16 12:40:52.697078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.646 [2024-12-16 12:40:52.697127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:45.646 [2024-12-16 12:40:52.697140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.214 ms 00:28:45.646 [2024-12-16 12:40:52.697149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.907 [2024-12-16 12:40:52.985899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.907 [2024-12-16 12:40:52.985950] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:45.907 [2024-12-16 12:40:52.985969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 288.672 ms 00:28:45.907 [2024-12-16 12:40:52.985978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.170 [2024-12-16 12:40:53.011690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.170 [2024-12-16 12:40:53.011735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:46.170 [2024-12-16 12:40:53.011747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.695 ms 00:28:46.170 [2024-12-16 12:40:53.011768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.170 [2024-12-16 12:40:53.037370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.170 [2024-12-16 12:40:53.037566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:46.170 [2024-12-16 12:40:53.037586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.556 ms 00:28:46.170 [2024-12-16 12:40:53.037594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.170 [2024-12-16 12:40:53.062287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.170 [2024-12-16 12:40:53.062343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:46.170 [2024-12-16 12:40:53.062357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.417 ms 00:28:46.170 [2024-12-16 12:40:53.062366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.170 [2024-12-16 12:40:53.086462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.170 [2024-12-16 12:40:53.086508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:46.170 [2024-12-16 12:40:53.086520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.018 ms 00:28:46.170 [2024-12-16 12:40:53.086528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.170 [2024-12-16 12:40:53.086572] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:46.170 [2024-12-16 12:40:53.086588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104960 / 261120 wr_cnt: 1 state: open 00:28:46.170 [2024-12-16 12:40:53.086600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 
12:40:53.086664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 
00:28:46.170 [2024-12-16 12:40:53.086878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.086993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 
wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:46.170 [2024-12-16 12:40:53.087146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:46.171 [2024-12-16 12:40:53.087430] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:46.171 [2024-12-16 12:40:53.087439] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37b5d6ac-e9c9-4bd5-8ad5-f8a6eec59538 00:28:46.171 [2024-12-16 12:40:53.087462] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104960 00:28:46.171 [2024-12-16 12:40:53.087470] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105920 00:28:46.171 [2024-12-16 12:40:53.087477] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104960 00:28:46.171 [2024-12-16 12:40:53.087487] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:28:46.171 [2024-12-16 12:40:53.087494] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:46.171 [2024-12-16 12:40:53.087503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:46.171 [2024-12-16 12:40:53.087529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:46.171 [2024-12-16 12:40:53.087536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:46.171 [2024-12-16 12:40:53.087543] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:46.171 [2024-12-16 12:40:53.087550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.171 [2024-12-16 12:40:53.087558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:46.171 [2024-12-16 12:40:53.087569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:28:46.171 [2024-12-16 12:40:53.087578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.171 [2024-12-16 12:40:53.102041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.171 [2024-12-16 12:40:53.102082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:46.171 [2024-12-16 12:40:53.102094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.443 ms 00:28:46.171 [2024-12-16 12:40:53.102103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.171 [2024-12-16 12:40:53.102557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.171 [2024-12-16 12:40:53.102572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:46.171 [2024-12-16 12:40:53.102582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:28:46.171 [2024-12-16 12:40:53.102598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.171 [2024-12-16 12:40:53.138126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.171 [2024-12-16 12:40:53.138184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:46.171 [2024-12-16 12:40:53.138194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.171 [2024-12-16 12:40:53.138203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.171 [2024-12-16 12:40:53.138258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.171 [2024-12-16 12:40:53.138266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:46.171 [2024-12-16 12:40:53.138274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.171 [2024-12-16 12:40:53.138286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.171 [2024-12-16 12:40:53.138344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.171 [2024-12-16 12:40:53.138354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:46.171 [2024-12-16 12:40:53.138361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.171 [2024-12-16 12:40:53.138369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.171 [2024-12-16 12:40:53.138383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.171 [2024-12-16 12:40:53.138390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:46.171 [2024-12-16 12:40:53.138397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.171 [2024-12-16 12:40:53.138404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.171 [2024-12-16 12:40:53.208522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.171 [2024-12-16 12:40:53.208725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:46.171 [2024-12-16 12:40:53.208742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:28:46.171 [2024-12-16 12:40:53.208749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.171 [2024-12-16 12:40:53.261150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.171 [2024-12-16 12:40:53.261191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:46.171 [2024-12-16 12:40:53.261200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.171 [2024-12-16 12:40:53.261211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.171 [2024-12-16 12:40:53.261280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.171 [2024-12-16 12:40:53.261287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:46.171 [2024-12-16 12:40:53.261294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.171 [2024-12-16 12:40:53.261301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.171 [2024-12-16 12:40:53.261331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.171 [2024-12-16 12:40:53.261337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:46.171 [2024-12-16 12:40:53.261352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.171 [2024-12-16 12:40:53.261359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.171 [2024-12-16 12:40:53.261439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.171 [2024-12-16 12:40:53.261449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:46.171 [2024-12-16 12:40:53.261456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.172 [2024-12-16 12:40:53.261462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.172 [2024-12-16 12:40:53.261486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.172 [2024-12-16 12:40:53.261493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:46.172 [2024-12-16 12:40:53.261500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.172 [2024-12-16 12:40:53.261505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.172 [2024-12-16 12:40:53.261547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.172 [2024-12-16 12:40:53.261554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:46.172 [2024-12-16 12:40:53.261561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.172 [2024-12-16 12:40:53.261567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.172 [2024-12-16 12:40:53.261609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.172 [2024-12-16 12:40:53.261618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:46.172 [2024-12-16 12:40:53.261625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.172 [2024-12-16 12:40:53.261631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.172 [2024-12-16 12:40:53.261742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 664.487 ms, result 0 00:28:47.557 00:28:47.557 00:28:47.557 12:40:54 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:49.472 12:40:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:49.473 [2024-12-16 12:40:56.568353] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:28:49.473 [2024-12-16 12:40:56.568611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84228 ] 00:28:49.733 [2024-12-16 12:40:56.726683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.733 [2024-12-16 12:40:56.817384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.993 [2024-12-16 12:40:57.050643] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:49.993 [2024-12-16 12:40:57.050700] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:50.255 [2024-12-16 12:40:57.206686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.206859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:50.255 [2024-12-16 12:40:57.206877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:50.255 [2024-12-16 12:40:57.206884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.255 [2024-12-16 12:40:57.206932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.206942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:50.255 [2024-12-16 12:40:57.206949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:50.255 [2024-12-16 12:40:57.206955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.255 [2024-12-16 12:40:57.206970] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:50.255 [2024-12-16 12:40:57.207539] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:50.255 [2024-12-16 12:40:57.207553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.207559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:50.255 [2024-12-16 12:40:57.207566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:28:50.255 [2024-12-16 12:40:57.207572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.255 [2024-12-16 12:40:57.208832] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:50.255 [2024-12-16 12:40:57.219352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.219382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:50.255 [2024-12-16 12:40:57.219391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.521 ms 00:28:50.255 [2024-12-16 12:40:57.219397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.255 [2024-12-16 12:40:57.219445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.219453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:50.255 [2024-12-16 12:40:57.219460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:50.255 [2024-12-16 12:40:57.219466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.255 [2024-12-16 12:40:57.225723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.225858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:50.255 [2024-12-16 12:40:57.225870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.216 ms 00:28:50.255 [2024-12-16 12:40:57.225880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.255 [2024-12-16 12:40:57.225937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.225944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:50.255 [2024-12-16 12:40:57.225951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:50.255 [2024-12-16 12:40:57.225956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.255 [2024-12-16 12:40:57.225994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.226002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:50.255 [2024-12-16 12:40:57.226009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:50.255 [2024-12-16 12:40:57.226014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.255 [2024-12-16 12:40:57.226032] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:50.255 [2024-12-16 12:40:57.229062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.229166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:50.255 [2024-12-16 12:40:57.229183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:28:50.255 [2024-12-16 12:40:57.229189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.255 [2024-12-16 12:40:57.229220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.229228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:50.255 [2024-12-16 12:40:57.229235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:50.255 [2024-12-16 12:40:57.229240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.255 [2024-12-16 12:40:57.229254] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:50.255 [2024-12-16 12:40:57.229271] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:50.255 [2024-12-16 12:40:57.229299] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:50.255 [2024-12-16 12:40:57.229313] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:50.255 [2024-12-16 12:40:57.229411] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:50.255 [2024-12-16 12:40:57.229420] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:50.255 [2024-12-16 12:40:57.229429] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:50.255 [2024-12-16 12:40:57.229437] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:50.255 [2024-12-16 12:40:57.229444] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:50.255 [2024-12-16 12:40:57.229451] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:50.255 [2024-12-16 12:40:57.229457] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:50.255 [2024-12-16 12:40:57.229463] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:50.255 [2024-12-16 12:40:57.229471] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:50.255 [2024-12-16 12:40:57.229477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.229482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:50.255 [2024-12-16 12:40:57.229488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:28:50.255 [2024-12-16 12:40:57.229494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.255 [2024-12-16 12:40:57.229558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.255 [2024-12-16 12:40:57.229565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:50.256 [2024-12-16 12:40:57.229571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:50.256 [2024-12-16 12:40:57.229577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.256 [2024-12-16 12:40:57.229652] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:50.256 [2024-12-16 12:40:57.229660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:50.256 [2024-12-16 12:40:57.229666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:50.256 [2024-12-16 12:40:57.229673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:50.256 [2024-12-16 12:40:57.229684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:50.256 [2024-12-16 12:40:57.229696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:50.256 [2024-12-16 12:40:57.229702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:50.256 [2024-12-16 12:40:57.229713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:50.256 [2024-12-16 12:40:57.229722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:50.256 [2024-12-16 12:40:57.229728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:50.256 [2024-12-16 12:40:57.229739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:50.256 [2024-12-16 12:40:57.229744] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:50.256 [2024-12-16 12:40:57.229749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:50.256 [2024-12-16 12:40:57.229760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:50.256 [2024-12-16 12:40:57.229765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:50.256 [2024-12-16 12:40:57.229776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:50.256 [2024-12-16 12:40:57.229786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:50.256 [2024-12-16 12:40:57.229792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:50.256 [2024-12-16 12:40:57.229802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:50.256 [2024-12-16 12:40:57.229808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:50.256 [2024-12-16 12:40:57.229818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:50.256 [2024-12-16 12:40:57.229823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:50.256 [2024-12-16 12:40:57.229833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:50.256 [2024-12-16 12:40:57.229838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:50.256 [2024-12-16 12:40:57.229849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:50.256 [2024-12-16 12:40:57.229855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:50.256 [2024-12-16 12:40:57.229860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:50.256 [2024-12-16 12:40:57.229865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:50.256 [2024-12-16 12:40:57.229870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:50.256 [2024-12-16 12:40:57.229875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:50.256 [2024-12-16 12:40:57.229886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:50.256 [2024-12-16 12:40:57.229891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229899] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:50.256 [2024-12-16 12:40:57.229905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:50.256 [2024-12-16 12:40:57.229912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:50.256 [2024-12-16 12:40:57.229917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.256 [2024-12-16 12:40:57.229923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:50.256 [2024-12-16 12:40:57.229928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:50.256 [2024-12-16 12:40:57.229933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:50.256 [2024-12-16 12:40:57.229939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:50.256 [2024-12-16 12:40:57.229944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:50.256 [2024-12-16 12:40:57.229950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:50.256 [2024-12-16 12:40:57.229956] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:50.256 [2024-12-16 12:40:57.229963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:50.256 [2024-12-16 12:40:57.229972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:50.256 [2024-12-16 12:40:57.229977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:50.256 [2024-12-16 12:40:57.229983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:50.256 [2024-12-16 12:40:57.229988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:50.256 [2024-12-16 12:40:57.229993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:50.256 [2024-12-16 12:40:57.229999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:50.256 [2024-12-16 12:40:57.230005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:50.256 [2024-12-16 12:40:57.230010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:50.256 [2024-12-16 12:40:57.230015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:50.256 [2024-12-16 12:40:57.230021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:50.256 [2024-12-16 12:40:57.230026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:50.256 [2024-12-16 12:40:57.230032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:50.256 [2024-12-16 12:40:57.230037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:50.256 [2024-12-16 12:40:57.230044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:28:50.256 [2024-12-16 12:40:57.230050] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:50.256 [2024-12-16 12:40:57.230056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:50.256 [2024-12-16 12:40:57.230063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:50.256 [2024-12-16 12:40:57.230069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:50.256 [2024-12-16 12:40:57.230075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:50.256 [2024-12-16 12:40:57.230081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:50.256 [2024-12-16 12:40:57.230088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.256 [2024-12-16 12:40:57.230094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:50.256 [2024-12-16 12:40:57.230100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:28:50.256 [2024-12-16 12:40:57.230106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.256 [2024-12-16 12:40:57.254829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.256 [2024-12-16 12:40:57.254863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:50.256 [2024-12-16 12:40:57.254873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.679 ms 00:28:50.256 [2024-12-16 12:40:57.254882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.256 [2024-12-16 12:40:57.254956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.256 [2024-12-16 12:40:57.254963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:50.256 [2024-12-16 12:40:57.254970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:50.256 [2024-12-16 12:40:57.254976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.256 [2024-12-16 12:40:57.292779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.256 [2024-12-16 12:40:57.292809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:50.256 [2024-12-16 12:40:57.292819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.757 ms 00:28:50.256 [2024-12-16 12:40:57.292826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.256 [2024-12-16 12:40:57.292858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.256 [2024-12-16 12:40:57.292866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:50.256 [2024-12-16 12:40:57.292875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:50.256 [2024-12-16 12:40:57.292881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.256 [2024-12-16 12:40:57.293330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.256 [2024-12-16 12:40:57.293352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:50.256 [2024-12-16 
12:40:57.293360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:28:50.256 [2024-12-16 12:40:57.293366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.257 [2024-12-16 12:40:57.293477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.257 [2024-12-16 12:40:57.293487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:50.257 [2024-12-16 12:40:57.293494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:28:50.257 [2024-12-16 12:40:57.293502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.257 [2024-12-16 12:40:57.305590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.257 [2024-12-16 12:40:57.305616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:50.257 [2024-12-16 12:40:57.305627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.072 ms 00:28:50.257 [2024-12-16 12:40:57.305633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.257 [2024-12-16 12:40:57.316307] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:50.257 [2024-12-16 12:40:57.316438] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:50.257 [2024-12-16 12:40:57.316450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.257 [2024-12-16 12:40:57.316457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:50.257 [2024-12-16 12:40:57.316465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.725 ms 00:28:50.257 [2024-12-16 12:40:57.316471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.257 [2024-12-16 12:40:57.335614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.257 [2024-12-16 12:40:57.335713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:50.257 [2024-12-16 12:40:57.335726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.114 ms 00:28:50.257 [2024-12-16 12:40:57.335733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.257 [2024-12-16 12:40:57.345321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.257 [2024-12-16 12:40:57.345351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:50.257 [2024-12-16 12:40:57.345359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.560 ms 00:28:50.257 [2024-12-16 12:40:57.345365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.257 [2024-12-16 12:40:57.354373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.257 [2024-12-16 12:40:57.354467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:50.257 [2024-12-16 12:40:57.354479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.982 ms 00:28:50.257 [2024-12-16 12:40:57.354485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.257 [2024-12-16 12:40:57.354945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.257 [2024-12-16 12:40:57.354957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:50.257 [2024-12-16 12:40:57.354966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.405 ms 00:28:50.257 [2024-12-16 12:40:57.354973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.519 [2024-12-16 12:40:57.403220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.519 [2024-12-16 12:40:57.403253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:50.519 [2024-12-16 12:40:57.403266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.235 ms 00:28:50.519 [2024-12-16 12:40:57.403274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.519 [2024-12-16 12:40:57.411689] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:50.519 [2024-12-16 12:40:57.413894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.519 [2024-12-16 12:40:57.414002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:50.519 [2024-12-16 12:40:57.414015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.588 ms 00:28:50.519 [2024-12-16 12:40:57.414022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.519 [2024-12-16 12:40:57.414075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.519 [2024-12-16 12:40:57.414083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:50.519 [2024-12-16 12:40:57.414089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:50.519 [2024-12-16 12:40:57.414097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.519 [2024-12-16 12:40:57.415424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.519 [2024-12-16 12:40:57.415449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:50.519 [2024-12-16 12:40:57.415457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:28:50.519 [2024-12-16 12:40:57.415463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.519 [2024-12-16 12:40:57.415481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.519 [2024-12-16 12:40:57.415488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:50.519 [2024-12-16 12:40:57.415495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:50.519 [2024-12-16 12:40:57.415501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.519 [2024-12-16 12:40:57.415534] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:50.519 [2024-12-16 12:40:57.415542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.519 [2024-12-16 12:40:57.415548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:50.519 [2024-12-16 12:40:57.415554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:50.519 [2024-12-16 12:40:57.415560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.519 [2024-12-16 12:40:57.434085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.519 [2024-12-16 12:40:57.434110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:50.519 [2024-12-16 12:40:57.434122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.511 ms 00:28:50.519 [2024-12-16 12:40:57.434129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:50.519 [2024-12-16 12:40:57.434196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.519 [2024-12-16 12:40:57.434204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:50.519 [2024-12-16 12:40:57.434211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:50.519 [2024-12-16 12:40:57.434218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.519 [2024-12-16 12:40:57.435169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 228.098 ms, result 0 00:28:51.903  [2024-12-16T12:40:59.580Z] Copying: 1328/1048576 [kB] (1328 kBps) [2024-12-16T12:41:00.965Z] Copying: 5332/1048576 [kB] (4004 kBps) [2024-12-16T12:41:01.908Z] Copying: 28/1024 [MB] (22 MBps) [2024-12-16T12:41:02.852Z] Copying: 59/1024 [MB] (31 MBps) [2024-12-16T12:41:03.796Z] Copying: 90/1024 [MB] (31 MBps) [2024-12-16T12:41:04.740Z] Copying: 127/1024 [MB] (36 MBps) [2024-12-16T12:41:05.681Z] Copying: 151/1024 [MB] (24 MBps) [2024-12-16T12:41:06.692Z] Copying: 169/1024 [MB] (17 MBps) [2024-12-16T12:41:07.637Z] Copying: 186/1024 [MB] (17 MBps) [2024-12-16T12:41:08.582Z] Copying: 204/1024 [MB] (17 MBps) [2024-12-16T12:41:09.970Z] Copying: 221/1024 [MB] (17 MBps) [2024-12-16T12:41:10.914Z] Copying: 238/1024 [MB] (16 MBps) [2024-12-16T12:41:11.858Z] Copying: 255/1024 [MB] (16 MBps) [2024-12-16T12:41:12.803Z] Copying: 272/1024 [MB] (16 MBps) [2024-12-16T12:41:13.746Z] Copying: 289/1024 [MB] (17 MBps) [2024-12-16T12:41:14.690Z] Copying: 307/1024 [MB] (17 MBps) [2024-12-16T12:41:15.634Z] Copying: 325/1024 [MB] (17 MBps) [2024-12-16T12:41:16.576Z] Copying: 341/1024 [MB] (16 MBps) [2024-12-16T12:41:17.963Z] Copying: 358/1024 [MB] (16 MBps) [2024-12-16T12:41:18.907Z] Copying: 376/1024 [MB] (17 MBps) [2024-12-16T12:41:19.851Z] Copying: 394/1024 [MB] (17 MBps) [2024-12-16T12:41:20.794Z] Copying: 412/1024 [MB] (17 MBps) [2024-12-16T12:41:21.738Z] Copying: 429/1024 [MB] (17 MBps) [2024-12-16T12:41:22.682Z] Copying: 445/1024 [MB] (15 MBps) [2024-12-16T12:41:23.627Z] Copying: 461/1024 [MB] (16 MBps) [2024-12-16T12:41:25.015Z] Copying: 477/1024 [MB] (16 MBps) [2024-12-16T12:41:25.591Z] Copying: 495/1024 [MB] (17 MBps) [2024-12-16T12:41:26.979Z] Copying: 513/1024 [MB] (17 MBps) [2024-12-16T12:41:27.924Z] Copying: 531/1024 [MB] (17 MBps) [2024-12-16T12:41:28.869Z] Copying: 549/1024 [MB] (17 MBps) [2024-12-16T12:41:29.943Z] Copying: 567/1024 [MB] (17 MBps) [2024-12-16T12:41:30.887Z] Copying: 584/1024 [MB] (17 MBps) [2024-12-16T12:41:31.830Z] Copying: 602/1024 [MB] (17 MBps) [2024-12-16T12:41:32.774Z] Copying: 620/1024 [MB] (17 MBps) [2024-12-16T12:41:33.719Z] Copying: 638/1024 [MB] (17 MBps) [2024-12-16T12:41:34.664Z] Copying: 655/1024 [MB] (17 MBps) [2024-12-16T12:41:35.607Z] Copying: 672/1024 [MB] (16 MBps) [2024-12-16T12:41:36.995Z] Copying: 689/1024 [MB] (16 MBps) [2024-12-16T12:41:37.938Z] Copying: 706/1024 [MB] (16 MBps) [2024-12-16T12:41:38.883Z] Copying: 724/1024 [MB] (17 MBps) [2024-12-16T12:41:39.828Z] Copying: 742/1024 [MB] (17 MBps) [2024-12-16T12:41:40.773Z] Copying: 758/1024 [MB] (16 MBps) [2024-12-16T12:41:41.718Z] Copying: 775/1024 [MB] (17 MBps) [2024-12-16T12:41:42.663Z] Copying: 793/1024 [MB] (17 MBps) [2024-12-16T12:41:43.605Z] Copying: 811/1024 [MB] (17 MBps) [2024-12-16T12:41:44.992Z] Copying: 827/1024 [MB] (16 MBps) [2024-12-16T12:41:45.938Z] Copying: 844/1024 [MB] (16 MBps) [2024-12-16T12:41:46.883Z] Copying: 861/1024 [MB] (16 MBps) 
[2024-12-16T12:41:47.828Z] Copying: 878/1024 [MB] (17 MBps) [2024-12-16T12:41:48.772Z] Copying: 896/1024 [MB] (17 MBps) [2024-12-16T12:41:49.717Z] Copying: 913/1024 [MB] (17 MBps) [2024-12-16T12:41:50.662Z] Copying: 931/1024 [MB] (17 MBps) [2024-12-16T12:41:51.606Z] Copying: 947/1024 [MB] (16 MBps) [2024-12-16T12:41:52.995Z] Copying: 964/1024 [MB] (17 MBps) [2024-12-16T12:41:53.940Z] Copying: 980/1024 [MB] (16 MBps) [2024-12-16T12:41:54.884Z] Copying: 996/1024 [MB] (15 MBps) [2024-12-16T12:41:55.145Z] Copying: 1014/1024 [MB] (17 MBps) [2024-12-16T12:41:55.407Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-16 12:41:55.146835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.301 [2024-12-16 12:41:55.146903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:48.301 [2024-12-16 12:41:55.146918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:48.301 [2024-12-16 12:41:55.146929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.301 [2024-12-16 12:41:55.146954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:48.301 [2024-12-16 12:41:55.150626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.301 [2024-12-16 12:41:55.150660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:48.301 [2024-12-16 12:41:55.150672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.654 ms 00:29:48.301 [2024-12-16 12:41:55.150682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.301 [2024-12-16 12:41:55.150934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.301 [2024-12-16 12:41:55.150960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:48.301 [2024-12-16 12:41:55.150970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:29:48.301 [2024-12-16 12:41:55.150979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.301 [2024-12-16 12:41:55.162478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.301 [2024-12-16 12:41:55.162510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:48.301 [2024-12-16 12:41:55.162520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.482 ms 00:29:48.301 [2024-12-16 12:41:55.162527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.301 [2024-12-16 12:41:55.167129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.301 [2024-12-16 12:41:55.167152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:48.301 [2024-12-16 12:41:55.167174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.580 ms 00:29:48.301 [2024-12-16 12:41:55.167180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.301 [2024-12-16 12:41:55.186975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.301 [2024-12-16 12:41:55.187003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:48.301 [2024-12-16 12:41:55.187011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.761 ms 00:29:48.301 [2024-12-16 12:41:55.187018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.301 [2024-12-16 12:41:55.198822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:48.301 [2024-12-16 12:41:55.198850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:48.301 [2024-12-16 12:41:55.198859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.777 ms 00:29:48.301 [2024-12-16 12:41:55.198867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.301 [2024-12-16 12:41:55.204128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.301 [2024-12-16 12:41:55.204259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:48.301 [2024-12-16 12:41:55.204293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.222 ms 00:29:48.301 [2024-12-16 12:41:55.204334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.301 [2024-12-16 12:41:55.233473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.301 [2024-12-16 12:41:55.233505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:48.301 [2024-12-16 12:41:55.233516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.099 ms 00:29:48.301 [2024-12-16 12:41:55.233524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.301 [2024-12-16 12:41:55.257492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.301 [2024-12-16 12:41:55.257522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:48.301 [2024-12-16 12:41:55.257533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.936 ms 00:29:48.301 [2024-12-16 12:41:55.257540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.301 [2024-12-16 12:41:55.280070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.301 [2024-12-16 12:41:55.280099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:48.301 [2024-12-16 12:41:55.280109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.501 ms 00:29:48.302 [2024-12-16 12:41:55.280116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.302 [2024-12-16 12:41:55.303223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.302 [2024-12-16 12:41:55.303386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:48.302 [2024-12-16 12:41:55.303402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.043 ms 00:29:48.302 [2024-12-16 12:41:55.303410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.302 [2024-12-16 12:41:55.303438] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:48.302 [2024-12-16 12:41:55.303451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:48.302 [2024-12-16 12:41:55.303462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:48.302 [2024-12-16 12:41:55.303471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303681] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303864] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.303992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 
12:41:55.304052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:48.302 [2024-12-16 12:41:55.304095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:48.303 [2024-12-16 12:41:55.304236] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:48.303 [2024-12-16 12:41:55.304244] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37b5d6ac-e9c9-4bd5-8ad5-f8a6eec59538 00:29:48.303 [2024-12-16 12:41:55.304252] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:48.303 [2024-12-16 12:41:55.304260] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 159680 00:29:48.303 [2024-12-16 12:41:55.304271] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 157696 00:29:48.303 [2024-12-16 12:41:55.304279] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] WAF: 1.0126 00:29:48.303 [2024-12-16 12:41:55.304286] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:48.303 [2024-12-16 12:41:55.304301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:48.303 [2024-12-16 12:41:55.304308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:48.303 [2024-12-16 12:41:55.304314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:48.303 [2024-12-16 12:41:55.304321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:48.303 [2024-12-16 12:41:55.304328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.303 [2024-12-16 12:41:55.304335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:48.303 [2024-12-16 12:41:55.304343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:29:48.303 [2024-12-16 12:41:55.304350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.303 [2024-12-16 12:41:55.317444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.303 [2024-12-16 12:41:55.317472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:48.303 [2024-12-16 12:41:55.317482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.063 ms 00:29:48.303 [2024-12-16 12:41:55.317490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.303 [2024-12-16 12:41:55.317862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.303 [2024-12-16 12:41:55.317872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:48.303 [2024-12-16 12:41:55.317881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:29:48.303 [2024-12-16 12:41:55.317888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.303 [2024-12-16 12:41:55.352932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.303 [2024-12-16 12:41:55.352964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:48.303 [2024-12-16 12:41:55.352975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.303 [2024-12-16 12:41:55.352982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.303 [2024-12-16 12:41:55.353034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.303 [2024-12-16 12:41:55.353043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:48.303 [2024-12-16 12:41:55.353050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.303 [2024-12-16 12:41:55.353058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.303 [2024-12-16 12:41:55.353113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.303 [2024-12-16 12:41:55.353122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:48.303 [2024-12-16 12:41:55.353130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.303 [2024-12-16 12:41:55.353138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.303 [2024-12-16 12:41:55.353170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.303 [2024-12-16 12:41:55.353180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:48.303 [2024-12-16 12:41:55.353188] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.303 [2024-12-16 12:41:55.353196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.564 [2024-12-16 12:41:55.435043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.564 [2024-12-16 12:41:55.435092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:48.564 [2024-12-16 12:41:55.435110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.564 [2024-12-16 12:41:55.435118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.564 [2024-12-16 12:41:55.504803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.565 [2024-12-16 12:41:55.504866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:48.565 [2024-12-16 12:41:55.504879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.565 [2024-12-16 12:41:55.504889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-16 12:41:55.504987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.565 [2024-12-16 12:41:55.505007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:48.565 [2024-12-16 12:41:55.505017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.565 [2024-12-16 12:41:55.505026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-16 12:41:55.505066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.565 [2024-12-16 12:41:55.505076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:48.565 [2024-12-16 12:41:55.505085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.565 [2024-12-16 12:41:55.505093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-16 12:41:55.505222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.565 [2024-12-16 12:41:55.505235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:48.565 [2024-12-16 12:41:55.505250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.565 [2024-12-16 12:41:55.505258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-16 12:41:55.505294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.565 [2024-12-16 12:41:55.505307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:48.565 [2024-12-16 12:41:55.505356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.565 [2024-12-16 12:41:55.505366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-16 12:41:55.505415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.565 [2024-12-16 12:41:55.505426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:48.565 [2024-12-16 12:41:55.505443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.565 [2024-12-16 12:41:55.505451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-16 12:41:55.505506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.565 [2024-12-16 12:41:55.505519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Open base bdev 00:29:48.565 [2024-12-16 12:41:55.505529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.565 [2024-12-16 12:41:55.505538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-16 12:41:55.505699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 358.826 ms, result 0 00:29:49.136 00:29:49.136 00:29:49.136 12:41:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:51.692 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:51.692 12:41:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:51.692 [2024-12-16 12:41:58.466960] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:51.692 [2024-12-16 12:41:58.467078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84858 ] 00:29:51.692 [2024-12-16 12:41:58.623704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.692 [2024-12-16 12:41:58.714798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.955 [2024-12-16 12:41:58.947785] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:51.955 [2024-12-16 12:41:58.947844] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:52.216 [2024-12-16 12:41:59.104272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.216 [2024-12-16 12:41:59.104444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:52.216 [2024-12-16 12:41:59.104461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:52.216 [2024-12-16 12:41:59.104468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.216 [2024-12-16 12:41:59.104514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.216 [2024-12-16 12:41:59.104524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:52.216 [2024-12-16 12:41:59.104531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:52.216 [2024-12-16 12:41:59.104537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.216 [2024-12-16 12:41:59.104554] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:52.216 [2024-12-16 12:41:59.105103] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:52.217 [2024-12-16 12:41:59.105117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.217 [2024-12-16 12:41:59.105123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:52.217 [2024-12-16 12:41:59.105130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:29:52.217 [2024-12-16 12:41:59.105135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.217 [2024-12-16 12:41:59.106458] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:52.217 [2024-12-16 12:41:59.116854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.217 [2024-12-16 12:41:59.116884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:52.217 [2024-12-16 12:41:59.116894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.397 ms 00:29:52.217 [2024-12-16 12:41:59.116900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.217 [2024-12-16 12:41:59.116949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.217 [2024-12-16 12:41:59.116957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:52.217 [2024-12-16 12:41:59.116965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:52.217 [2024-12-16 12:41:59.116971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.217 [2024-12-16 12:41:59.123231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.217 [2024-12-16 12:41:59.123365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:52.217 [2024-12-16 12:41:59.123378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.219 ms 00:29:52.217 [2024-12-16 12:41:59.123387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.217 [2024-12-16 12:41:59.123445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.217 [2024-12-16 12:41:59.123452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:52.217 [2024-12-16 12:41:59.123458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:52.217 [2024-12-16 12:41:59.123464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.217 [2024-12-16 12:41:59.123502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.217 [2024-12-16 12:41:59.123510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:52.217 [2024-12-16 12:41:59.123516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:52.217 [2024-12-16 12:41:59.123522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.217 [2024-12-16 12:41:59.123539] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:52.217 [2024-12-16 12:41:59.126595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.217 [2024-12-16 12:41:59.126690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:52.217 [2024-12-16 12:41:59.126707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.060 ms 00:29:52.217 [2024-12-16 12:41:59.126714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.217 [2024-12-16 12:41:59.126745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.217 [2024-12-16 12:41:59.126752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:52.217 [2024-12-16 12:41:59.126759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:52.217 [2024-12-16 12:41:59.126765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.217 [2024-12-16 12:41:59.126779] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:52.217 [2024-12-16 12:41:59.126799] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:52.217 [2024-12-16 12:41:59.126827] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:52.217 [2024-12-16 12:41:59.126840] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:52.217 [2024-12-16 12:41:59.126923] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:52.217 [2024-12-16 12:41:59.126931] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:52.217 [2024-12-16 12:41:59.126939] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:52.217 [2024-12-16 12:41:59.126946] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:52.217 [2024-12-16 12:41:59.126953] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:52.217 [2024-12-16 12:41:59.126959] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:52.217 [2024-12-16 12:41:59.126966] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:52.217 [2024-12-16 12:41:59.126971] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:52.217 [2024-12-16 12:41:59.126980] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:52.217 [2024-12-16 12:41:59.126985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.217 [2024-12-16 12:41:59.126991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:52.217 [2024-12-16 12:41:59.126997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:29:52.217 [2024-12-16 12:41:59.127003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.217 [2024-12-16 12:41:59.127066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.217 [2024-12-16 12:41:59.127073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:52.217 [2024-12-16 12:41:59.127079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:52.217 [2024-12-16 12:41:59.127084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.217 [2024-12-16 12:41:59.127174] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:52.217 [2024-12-16 12:41:59.127184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:52.217 [2024-12-16 12:41:59.127191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:52.217 [2024-12-16 12:41:59.127196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.217 [2024-12-16 12:41:59.127202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:52.217 [2024-12-16 12:41:59.127209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:52.217 [2024-12-16 12:41:59.127216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:52.217 [2024-12-16 12:41:59.127221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:52.217 [2024-12-16 12:41:59.127228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:52.217 [2024-12-16 
12:41:59.127235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:52.217 [2024-12-16 12:41:59.127241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:52.217 [2024-12-16 12:41:59.127247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:52.217 [2024-12-16 12:41:59.127252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:52.217 [2024-12-16 12:41:59.127262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:52.217 [2024-12-16 12:41:59.127268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:52.217 [2024-12-16 12:41:59.127273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.217 [2024-12-16 12:41:59.127278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:52.217 [2024-12-16 12:41:59.127283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:52.217 [2024-12-16 12:41:59.127288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.217 [2024-12-16 12:41:59.127293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:52.217 [2024-12-16 12:41:59.127300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:52.217 [2024-12-16 12:41:59.127305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.217 [2024-12-16 12:41:59.127310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:52.217 [2024-12-16 12:41:59.127315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:52.217 [2024-12-16 12:41:59.127321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.217 [2024-12-16 12:41:59.127326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:52.217 [2024-12-16 12:41:59.127331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:52.217 [2024-12-16 12:41:59.127336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.217 [2024-12-16 12:41:59.127341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:52.217 [2024-12-16 12:41:59.127347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:52.218 [2024-12-16 12:41:59.127351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.218 [2024-12-16 12:41:59.127363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:52.218 [2024-12-16 12:41:59.127369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:52.218 [2024-12-16 12:41:59.127374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:52.218 [2024-12-16 12:41:59.127379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:52.218 [2024-12-16 12:41:59.127384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:52.218 [2024-12-16 12:41:59.127390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:52.218 [2024-12-16 12:41:59.127395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:52.218 [2024-12-16 12:41:59.127400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:52.218 [2024-12-16 12:41:59.127404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.218 [2024-12-16 12:41:59.127409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:29:52.218 [2024-12-16 12:41:59.127415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:52.218 [2024-12-16 12:41:59.127420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.218 [2024-12-16 12:41:59.127425] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:52.218 [2024-12-16 12:41:59.127431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:52.218 [2024-12-16 12:41:59.127437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:52.218 [2024-12-16 12:41:59.127443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.218 [2024-12-16 12:41:59.127449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:52.218 [2024-12-16 12:41:59.127455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:52.218 [2024-12-16 12:41:59.127460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:52.218 [2024-12-16 12:41:59.127465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:52.218 [2024-12-16 12:41:59.127470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:52.218 [2024-12-16 12:41:59.127475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:52.218 [2024-12-16 12:41:59.127482] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:52.218 [2024-12-16 12:41:59.127490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:52.218 [2024-12-16 12:41:59.127499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:52.218 [2024-12-16 12:41:59.127504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:52.218 [2024-12-16 12:41:59.127509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:52.218 [2024-12-16 12:41:59.127514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:52.218 [2024-12-16 12:41:59.127519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:52.218 [2024-12-16 12:41:59.127524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:52.218 [2024-12-16 12:41:59.127530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:52.218 [2024-12-16 12:41:59.127536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:52.218 [2024-12-16 12:41:59.127541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:52.218 [2024-12-16 12:41:59.127546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:52.218 [2024-12-16 12:41:59.127551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:52.218 [2024-12-16 12:41:59.127556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:52.218 [2024-12-16 12:41:59.127561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:52.218 [2024-12-16 12:41:59.127566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:52.218 [2024-12-16 12:41:59.127571] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:52.218 [2024-12-16 12:41:59.127578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:52.218 [2024-12-16 12:41:59.127584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:52.218 [2024-12-16 12:41:59.127589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:52.218 [2024-12-16 12:41:59.127596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:52.218 [2024-12-16 12:41:59.127603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:52.218 [2024-12-16 12:41:59.127609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.127615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:52.218 [2024-12-16 12:41:59.127621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:29:52.218 [2024-12-16 12:41:59.127627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.151935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.151964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:52.218 [2024-12-16 12:41:59.151973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.264 ms 00:29:52.218 [2024-12-16 12:41:59.151982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.152048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.152055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:52.218 [2024-12-16 12:41:59.152061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:52.218 [2024-12-16 12:41:59.152067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.192408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.192451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:52.218 [2024-12-16 12:41:59.192462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.300 ms 00:29:52.218 [2024-12-16 12:41:59.192469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.192502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 
12:41:59.192510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:52.218 [2024-12-16 12:41:59.192519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:52.218 [2024-12-16 12:41:59.192525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.192938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.192953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:52.218 [2024-12-16 12:41:59.192961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:29:52.218 [2024-12-16 12:41:59.192967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.193081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.193089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:52.218 [2024-12-16 12:41:59.193096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:29:52.218 [2024-12-16 12:41:59.193104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.205085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.205112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:52.218 [2024-12-16 12:41:59.205123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.964 ms 00:29:52.218 [2024-12-16 12:41:59.205130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.215926] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:52.218 [2024-12-16 12:41:59.216062] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:52.218 [2024-12-16 12:41:59.216074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.216081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:52.218 [2024-12-16 12:41:59.216089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.840 ms 00:29:52.218 [2024-12-16 12:41:59.216096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.234730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.234826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:52.218 [2024-12-16 12:41:59.234840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.606 ms 00:29:52.218 [2024-12-16 12:41:59.234847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.244131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.244168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:52.218 [2024-12-16 12:41:59.244176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.249 ms 00:29:52.218 [2024-12-16 12:41:59.244182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.253149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.253179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:29:52.218 [2024-12-16 12:41:59.253187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.942 ms 00:29:52.218 [2024-12-16 12:41:59.253192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.253682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.253699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:52.218 [2024-12-16 12:41:59.253709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:29:52.218 [2024-12-16 12:41:59.253715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.302211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.218 [2024-12-16 12:41:59.302247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:52.218 [2024-12-16 12:41:59.302264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.481 ms 00:29:52.218 [2024-12-16 12:41:59.302271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.218 [2024-12-16 12:41:59.310946] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:52.219 [2024-12-16 12:41:59.313260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.219 [2024-12-16 12:41:59.313284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:52.219 [2024-12-16 12:41:59.313294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.954 ms 00:29:52.219 [2024-12-16 12:41:59.313301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.219 [2024-12-16 12:41:59.313369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.219 [2024-12-16 12:41:59.313377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:52.219 [2024-12-16 12:41:59.313385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:52.219 [2024-12-16 12:41:59.313393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.219 [2024-12-16 12:41:59.314047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.219 [2024-12-16 12:41:59.314070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:52.219 [2024-12-16 12:41:59.314078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:29:52.219 [2024-12-16 12:41:59.314084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.219 [2024-12-16 12:41:59.314103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.219 [2024-12-16 12:41:59.314110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:52.219 [2024-12-16 12:41:59.314117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:52.219 [2024-12-16 12:41:59.314123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.219 [2024-12-16 12:41:59.314167] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:52.219 [2024-12-16 12:41:59.314176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.219 [2024-12-16 12:41:59.314183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:52.219 [2024-12-16 12:41:59.314189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.011 ms 00:29:52.219 [2024-12-16 12:41:59.314196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.480 [2024-12-16 12:41:59.332785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.480 [2024-12-16 12:41:59.332917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:52.480 [2024-12-16 12:41:59.332936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.575 ms 00:29:52.480 [2024-12-16 12:41:59.332942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:52.480 [2024-12-16 12:41:59.332996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.480 [2024-12-16 12:41:59.333004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:52.480 [2024-12-16 12:41:59.333010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:52.480 [2024-12-16 12:41:59.333017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.480 [2024-12-16 12:41:59.334065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 229.388 ms, result 0
00:29:53.424  [2024-12-16T12:42:01.475Z] Copying: 12/1024 [MB] (12 MBps) [... Copying progress entries 23/1024 through 1009/1024 elided; throughput held between 10 and 16 MBps ...] [2024-12-16T12:43:28.743Z] Copying: 1021/1024 [MB] (11 MBps) [2024-12-16T12:43:29.380Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-16 12:43:29.086662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.086736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:22.274 [2024-12-16 12:43:29.086751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:22.274 [2024-12-16 12:43:29.086758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
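Each management step above is logged by trace_step as an Action/name/duration/status quadruple, and finish_msg reports the end-to-end total ('FTL startup', duration = 229.388 ms, result 0). A quick sanity check on a capture like this is to sum the per-step durations and compare them against the finish_msg totals; a minimal offline sketch, assuming the console output has been saved to a hypothetical ftl.log:

  # Sum every per-step "duration: X ms" in the capture (startup and shutdown
  # steps together) and print the end-to-end totals that finish_msg reported.
  grep -o 'duration: [0-9.]* ms' ftl.log |
    awk '{ sum += $2 } END { printf "sum of per-step durations: %.3f ms\n", sum }'
  grep -o 'duration = [0-9.]* ms' ftl.log   # finish_msg totals, e.g. 229.388 ms

In this capture the summed steps come in under the reported totals, which is expected: the finish_msg figure also covers time spent between steps.

00:31:22.274 [2024-12-16 12:43:29.086777]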
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:22.274 [2024-12-16 12:43:29.089132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.089352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:22.274 [2024-12-16 12:43:29.089371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.341 ms 00:31:22.274 [2024-12-16 12:43:29.089379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.274 [2024-12-16 12:43:29.089572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.089580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:22.274 [2024-12-16 12:43:29.089588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:31:22.274 [2024-12-16 12:43:29.089595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.274 [2024-12-16 12:43:29.092696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.092715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:22.274 [2024-12-16 12:43:29.092723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.091 ms 00:31:22.274 [2024-12-16 12:43:29.092733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.274 [2024-12-16 12:43:29.097751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.097852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:22.274 [2024-12-16 12:43:29.097905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.004 ms 00:31:22.274 [2024-12-16 12:43:29.097925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.274 [2024-12-16 12:43:29.118129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.118254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:22.274 [2024-12-16 12:43:29.118306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.140 ms 00:31:22.274 [2024-12-16 12:43:29.118324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.274 [2024-12-16 12:43:29.131194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.131295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:22.274 [2024-12-16 12:43:29.131341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.833 ms 00:31:22.274 [2024-12-16 12:43:29.131360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.274 [2024-12-16 12:43:29.133617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.133643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:22.274 [2024-12-16 12:43:29.133651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.215 ms 00:31:22.274 [2024-12-16 12:43:29.133658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.274 [2024-12-16 12:43:29.153175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.153205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:22.274 [2024-12-16 12:43:29.153214] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.506 ms 00:31:22.274 [2024-12-16 12:43:29.153220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.274 [2024-12-16 12:43:29.171567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.171592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:22.274 [2024-12-16 12:43:29.171600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.321 ms 00:31:22.274 [2024-12-16 12:43:29.171606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:22.274 [2024-12-16 12:43:29.189967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.189991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:22.274 [2024-12-16 12:43:29.189999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.335 ms 00:31:22.274 [2024-12-16 12:43:29.190006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.274 [2024-12-16 12:43:29.207521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.274 [2024-12-16 12:43:29.207545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:22.274 [2024-12-16 12:43:29.207553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.454 ms 00:31:22.274 [2024-12-16 12:43:29.207559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:22.274 [2024-12-16 12:43:29.207584] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:22.274 [2024-12-16 12:43:29.207601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:22.274 [2024-12-16 12:43:29.207611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:22.274 [2024-12-16 12:43:29.207618 .. 12:43:29.208192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (98 identical free-band entries elided)
00:31:22.275 [2024-12-16 12:43:29.208204] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:22.275 [2024-12-16 12:43:29.208211] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37b5d6ac-e9c9-4bd5-8ad5-f8a6eec59538 00:31:22.275 [2024-12-16 12:43:29.208218] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:22.275 [2024-12-16 12:43:29.208224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:22.275 [2024-12-16 12:43:29.208230] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:22.275 [2024-12-16 12:43:29.208237] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:31:22.275 [2024-12-16 12:43:29.208248] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:22.275 [2024-12-16 12:43:29.208255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:22.275 [2024-12-16 12:43:29.208261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:22.275 [2024-12-16 12:43:29.208266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:22.275 [2024-12-16 12:43:29.208271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:22.275 [2024-12-16 12:43:29.208276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.275 [2024-12-16 12:43:29.208282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:22.275 [2024-12-16 12:43:29.208289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:31:22.275 [2024-12-16 12:43:29.208297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
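The ftl_dev_dump_bands/ftl_dev_dump_stats block above is the device's final self-accounting before clean shutdown: Band 1 is fully valid and closed, Band 2 is open with 1536 valid blocks, every other band is free, and WAF is reported as inf because the write-amplification ratio divides total writes (960) by user writes (0) in this run. A small sketch, again against a hypothetical saved ftl.log, that tallies the dump by band state:

  # Tally band states and valid-block counts from the ftl_dev_dump_bands lines.
  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' ftl.log |
    awk '{ count[$NF]++; valid[$NF] += $3 }   # $NF = state, $3 = valid blocks
         END { for (s in count) printf "%-7s %3d bands, %d valid blocks\n", s, count[s], valid[s] }'

00:31:22.275 [2024-12-16 12:43:29.218390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]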
Action 00:31:22.275 [2024-12-16 12:43:29.218412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:22.275 [2024-12-16 12:43:29.218421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.081 ms 00:31:22.275 [2024-12-16 12:43:29.218427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.218718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.275 [2024-12-16 12:43:29.218731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:22.275 [2024-12-16 12:43:29.218737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:31:22.275 [2024-12-16 12:43:29.218744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.246110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.246136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:22.275 [2024-12-16 12:43:29.246143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.246150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.246200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.246211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:22.275 [2024-12-16 12:43:29.246217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.246223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.246266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.246274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:22.275 [2024-12-16 12:43:29.246280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.246286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.246298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.246306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:22.275 [2024-12-16 12:43:29.246315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.246321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.310141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.310184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:22.275 [2024-12-16 12:43:29.310194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.310200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.361772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.361815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:22.275 [2024-12-16 12:43:29.361823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.361830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 
12:43:29.361901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.361909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:22.275 [2024-12-16 12:43:29.361916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.361922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.361951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.361959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:22.275 [2024-12-16 12:43:29.361965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.361974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.362048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.362056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:22.275 [2024-12-16 12:43:29.362063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.362069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.362094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.362102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:22.275 [2024-12-16 12:43:29.362108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.362114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.362152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.362176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:22.275 [2024-12-16 12:43:29.362183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.362190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.362228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.275 [2024-12-16 12:43:29.362236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:22.275 [2024-12-16 12:43:29.362242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.275 [2024-12-16 12:43:29.362250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.275 [2024-12-16 12:43:29.362359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.678 ms, result 0 00:31:22.845 00:31:22.845 00:31:23.106 12:43:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:25.020 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:31:25.020 12:43:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:31:25.020 12:43:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:31:25.020 12:43:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:25.020 12:43:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:25.020 12:43:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:25.280 12:43:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:25.280 12:43:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:25.280 Process with pid 82645 is not found 00:31:25.280 12:43:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82645 00:31:25.280 12:43:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82645 ']' 00:31:25.280 12:43:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 82645 00:31:25.280 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82645) - No such process 00:31:25.280 12:43:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 82645 is not found' 00:31:25.280 12:43:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:31:25.541 Remove shared memory files 00:31:25.541 12:43:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:31:25.541 12:43:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:25.541 12:43:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:25.541 12:43:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:25.541 12:43:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:31:25.541 12:43:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:25.541 12:43:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:25.541 ************************************ 00:31:25.541 END TEST ftl_dirty_shutdown 00:31:25.541 ************************************ 00:31:25.541 00:31:25.541 real 5m4.635s 00:31:25.541 user 5m20.073s 00:31:25.541 sys 0m23.242s 00:31:25.541 12:43:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:25.541 12:43:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:25.541 12:43:32 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:25.541 12:43:32 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:25.541 12:43:32 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:25.541 12:43:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:25.541 ************************************ 00:31:25.541 START TEST ftl_upgrade_shutdown 00:31:25.541 ************************************ 00:31:25.541 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:25.541 * Looking for test storage... 
00:31:25.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:25.542 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:25.542 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:31:25.542 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:25.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.803 --rc genhtml_branch_coverage=1 00:31:25.803 --rc genhtml_function_coverage=1 00:31:25.803 --rc genhtml_legend=1 00:31:25.803 --rc geninfo_all_blocks=1 00:31:25.803 --rc geninfo_unexecuted_blocks=1 00:31:25.803 00:31:25.803 ' 00:31:25.803 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:25.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.803 --rc genhtml_branch_coverage=1 00:31:25.803 --rc genhtml_function_coverage=1 00:31:25.803 --rc genhtml_legend=1 00:31:25.803 --rc geninfo_all_blocks=1 00:31:25.803 --rc geninfo_unexecuted_blocks=1 00:31:25.803 00:31:25.803 ' 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.804 --rc genhtml_branch_coverage=1 00:31:25.804 --rc genhtml_function_coverage=1 00:31:25.804 --rc genhtml_legend=1 00:31:25.804 --rc geninfo_all_blocks=1 00:31:25.804 --rc geninfo_unexecuted_blocks=1 00:31:25.804 00:31:25.804 ' 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.804 --rc genhtml_branch_coverage=1 00:31:25.804 --rc genhtml_function_coverage=1 00:31:25.804 --rc genhtml_legend=1 00:31:25.804 --rc geninfo_all_blocks=1 00:31:25.804 --rc geninfo_unexecuted_blocks=1 00:31:25.804 00:31:25.804 ' 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:31:25.804 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85872 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85872 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85872 ']' 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:25.804 12:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:31:25.804 [2024-12-16 12:43:32.758023] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
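At this point the harness has launched spdk_tgt pinned to core 0, and waitforlisten blocks until pid 85872 answers on /var/tmp/spdk.sock (the trace shows max_retries=100). A minimal standalone sketch of that kind of wait loop, not the harness's actual waitforlisten implementation, with the polling interval an assumption:

  # Poll the RPC socket until the target answers; spdk_get_version is a
  # standard SPDK RPC that succeeds as soon as the app is listening.
  sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done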
00:31:25.804 [2024-12-16 12:43:32.758132] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85872 ] 00:31:26.065 [2024-12-16 12:43:32.914234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.065 [2024-12-16 12:43:33.008630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:26.638 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:31:26.899 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:31:26.899 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:26.899 12:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:31:26.899 12:43:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:31:26.899 12:43:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:26.899 12:43:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:26.899 12:43:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:31:26.899 12:43:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:27.161 { 00:31:27.161 "name": "basen1", 00:31:27.161 "aliases": [ 00:31:27.161 "65d7da81-1d0d-4c6f-95af-559504360eca" 00:31:27.161 ], 00:31:27.161 "product_name": "NVMe disk", 00:31:27.161 "block_size": 4096, 00:31:27.161 "num_blocks": 1310720, 00:31:27.161 "uuid": "65d7da81-1d0d-4c6f-95af-559504360eca", 00:31:27.161 "numa_id": -1, 00:31:27.161 "assigned_rate_limits": { 00:31:27.161 "rw_ios_per_sec": 0, 00:31:27.161 "rw_mbytes_per_sec": 0, 00:31:27.161 "r_mbytes_per_sec": 0, 00:31:27.161 "w_mbytes_per_sec": 0 00:31:27.161 }, 00:31:27.161 "claimed": true, 00:31:27.161 "claim_type": "read_many_write_one", 00:31:27.161 "zoned": false, 00:31:27.161 "supported_io_types": { 00:31:27.161 "read": true, 00:31:27.161 "write": true, 00:31:27.161 "unmap": true, 00:31:27.161 "flush": true, 00:31:27.161 "reset": true, 00:31:27.161 "nvme_admin": true, 00:31:27.161 "nvme_io": true, 00:31:27.161 "nvme_io_md": false, 00:31:27.161 "write_zeroes": true, 00:31:27.161 "zcopy": false, 00:31:27.161 "get_zone_info": false, 00:31:27.161 "zone_management": false, 00:31:27.161 "zone_append": false, 00:31:27.161 "compare": true, 00:31:27.161 "compare_and_write": false, 00:31:27.161 "abort": true, 00:31:27.161 "seek_hole": false, 00:31:27.161 "seek_data": false, 00:31:27.161 "copy": true, 00:31:27.161 "nvme_iov_md": false 00:31:27.161 }, 00:31:27.161 "driver_specific": { 00:31:27.161 "nvme": [ 00:31:27.161 { 00:31:27.161 "pci_address": "0000:00:11.0", 00:31:27.161 "trid": { 00:31:27.161 "trtype": "PCIe", 00:31:27.161 "traddr": "0000:00:11.0" 00:31:27.161 }, 00:31:27.161 "ctrlr_data": { 00:31:27.161 "cntlid": 0, 00:31:27.161 "vendor_id": "0x1b36", 00:31:27.161 "model_number": "QEMU NVMe Ctrl", 00:31:27.161 "serial_number": "12341", 00:31:27.161 "firmware_revision": "8.0.0", 00:31:27.161 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:27.161 "oacs": { 00:31:27.161 "security": 0, 00:31:27.161 "format": 1, 00:31:27.161 "firmware": 0, 00:31:27.161 "ns_manage": 1 00:31:27.161 }, 00:31:27.161 "multi_ctrlr": false, 00:31:27.161 "ana_reporting": false 00:31:27.161 }, 00:31:27.161 "vs": { 00:31:27.161 "nvme_version": "1.4" 00:31:27.161 }, 00:31:27.161 "ns_data": { 00:31:27.161 "id": 1, 00:31:27.161 "can_share": false 00:31:27.161 } 00:31:27.161 } 00:31:27.161 ], 00:31:27.161 "mp_policy": "active_passive" 00:31:27.161 } 00:31:27.161 } 00:31:27.161 ]' 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:27.161 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:27.423 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=1d564964-6b7e-40bc-846f-494c07e063a6 00:31:27.423 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:27.423 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1d564964-6b7e-40bc-846f-494c07e063a6 00:31:27.684 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:31:27.684 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=0c699565-84da-4816-9856-9e026b7d50d6 00:31:27.684 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 0c699565-84da-4816-9856-9e026b7d50d6 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=3c901765-7ae9-4eb5-a330-004db1a9611b 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 3c901765-7ae9-4eb5-a330-004db1a9611b ]] 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 3c901765-7ae9-4eb5-a330-004db1a9611b 5120 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=3c901765-7ae9-4eb5-a330-004db1a9611b 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3c901765-7ae9-4eb5-a330-004db1a9611b 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3c901765-7ae9-4eb5-a330-004db1a9611b 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:27.945 12:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3c901765-7ae9-4eb5-a330-004db1a9611b 00:31:28.209 12:43:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:28.209 { 00:31:28.209 "name": "3c901765-7ae9-4eb5-a330-004db1a9611b", 00:31:28.209 "aliases": [ 00:31:28.209 "lvs/basen1p0" 00:31:28.209 ], 00:31:28.209 "product_name": "Logical Volume", 00:31:28.209 "block_size": 4096, 00:31:28.209 "num_blocks": 5242880, 00:31:28.209 "uuid": "3c901765-7ae9-4eb5-a330-004db1a9611b", 00:31:28.209 "assigned_rate_limits": { 00:31:28.209 "rw_ios_per_sec": 0, 00:31:28.209 "rw_mbytes_per_sec": 0, 00:31:28.209 "r_mbytes_per_sec": 0, 00:31:28.209 "w_mbytes_per_sec": 0 00:31:28.209 }, 00:31:28.209 "claimed": false, 00:31:28.209 "zoned": false, 00:31:28.209 "supported_io_types": { 00:31:28.209 "read": true, 00:31:28.209 "write": true, 00:31:28.209 "unmap": true, 00:31:28.209 "flush": false, 00:31:28.209 "reset": true, 00:31:28.209 "nvme_admin": false, 00:31:28.209 "nvme_io": false, 00:31:28.209 "nvme_io_md": false, 00:31:28.209 "write_zeroes": 
true, 00:31:28.209 "zcopy": false, 00:31:28.209 "get_zone_info": false, 00:31:28.209 "zone_management": false, 00:31:28.209 "zone_append": false, 00:31:28.209 "compare": false, 00:31:28.209 "compare_and_write": false, 00:31:28.209 "abort": false, 00:31:28.209 "seek_hole": true, 00:31:28.209 "seek_data": true, 00:31:28.209 "copy": false, 00:31:28.209 "nvme_iov_md": false 00:31:28.209 }, 00:31:28.209 "driver_specific": { 00:31:28.209 "lvol": { 00:31:28.209 "lvol_store_uuid": "0c699565-84da-4816-9856-9e026b7d50d6", 00:31:28.209 "base_bdev": "basen1", 00:31:28.209 "thin_provision": true, 00:31:28.209 "num_allocated_clusters": 0, 00:31:28.209 "snapshot": false, 00:31:28.209 "clone": false, 00:31:28.209 "esnap_clone": false 00:31:28.210 } 00:31:28.210 } 00:31:28.210 } 00:31:28.210 ]' 00:31:28.210 12:43:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:28.210 12:43:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:28.210 12:43:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:28.210 12:43:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:31:28.210 12:43:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:31:28.210 12:43:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:31:28.210 12:43:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:28.210 12:43:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:28.210 12:43:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:28.469 12:43:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:28.469 12:43:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:28.469 12:43:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:28.731 12:43:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:28.731 12:43:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:28.731 12:43:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 3c901765-7ae9-4eb5-a330-004db1a9611b -c cachen1p0 --l2p_dram_limit 2 00:31:28.993 [2024-12-16 12:43:35.882638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.993 [2024-12-16 12:43:35.882684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:28.993 [2024-12-16 12:43:35.882698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:28.993 [2024-12-16 12:43:35.882705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.993 [2024-12-16 12:43:35.882748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.993 [2024-12-16 12:43:35.882757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:28.993 [2024-12-16 12:43:35.882765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:28.993 [2024-12-16 12:43:35.882771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.993 [2024-12-16 12:43:35.882788] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:28.993 [2024-12-16 
12:43:35.883312] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:28.993 [2024-12-16 12:43:35.883330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.993 [2024-12-16 12:43:35.883336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:28.993 [2024-12-16 12:43:35.883347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.544 ms 00:31:28.993 [2024-12-16 12:43:35.883354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.993 [2024-12-16 12:43:35.883377] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 9dc1892b-efd8-429f-a5c2-6e8723f634fe 00:31:28.993 [2024-12-16 12:43:35.884649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.993 [2024-12-16 12:43:35.884679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:31:28.993 [2024-12-16 12:43:35.884688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:31:28.993 [2024-12-16 12:43:35.884697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.993 [2024-12-16 12:43:35.891529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.993 [2024-12-16 12:43:35.891560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:28.993 [2024-12-16 12:43:35.891568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.795 ms 00:31:28.993 [2024-12-16 12:43:35.891575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.993 [2024-12-16 12:43:35.891636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.993 [2024-12-16 12:43:35.891646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:28.993 [2024-12-16 12:43:35.891652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:28.993 [2024-12-16 12:43:35.891662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.993 [2024-12-16 12:43:35.891702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.993 [2024-12-16 12:43:35.891712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:28.993 [2024-12-16 12:43:35.891719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:28.993 [2024-12-16 12:43:35.891730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.993 [2024-12-16 12:43:35.891746] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:28.993 [2024-12-16 12:43:35.894981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.993 [2024-12-16 12:43:35.895005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:28.993 [2024-12-16 12:43:35.895016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.236 ms 00:31:28.993 [2024-12-16 12:43:35.895022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.993 [2024-12-16 12:43:35.895045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.993 [2024-12-16 12:43:35.895052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:28.993 [2024-12-16 12:43:35.895060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:28.993 [2024-12-16 12:43:35.895065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:28.993 [2024-12-16 12:43:35.895079] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:28.993 [2024-12-16 12:43:35.895207] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:28.993 [2024-12-16 12:43:35.895223] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:28.993 [2024-12-16 12:43:35.895232] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:28.993 [2024-12-16 12:43:35.895241] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:28.993 [2024-12-16 12:43:35.895248] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:28.993 [2024-12-16 12:43:35.895255] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:28.993 [2024-12-16 12:43:35.895261] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:28.993 [2024-12-16 12:43:35.895272] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:28.993 [2024-12-16 12:43:35.895278] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:28.993 [2024-12-16 12:43:35.895286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.993 [2024-12-16 12:43:35.895292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:28.993 [2024-12-16 12:43:35.895300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms 00:31:28.993 [2024-12-16 12:43:35.895306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.993 [2024-12-16 12:43:35.895375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.993 [2024-12-16 12:43:35.895387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:28.993 [2024-12-16 12:43:35.895395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:31:28.993 [2024-12-16 12:43:35.895401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.993 [2024-12-16 12:43:35.895478] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:28.993 [2024-12-16 12:43:35.895487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:28.993 [2024-12-16 12:43:35.895495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:28.993 [2024-12-16 12:43:35.895501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.993 [2024-12-16 12:43:35.895509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:28.993 [2024-12-16 12:43:35.895514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:28.993 [2024-12-16 12:43:35.895522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:28.993 [2024-12-16 12:43:35.895528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:28.993 [2024-12-16 12:43:35.895535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:28.993 [2024-12-16 12:43:35.895540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.993 [2024-12-16 12:43:35.895546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:28.993 [2024-12-16 12:43:35.895552] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:31:28.993 [2024-12-16 12:43:35.895559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.993 [2024-12-16 12:43:35.895566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:28.993 [2024-12-16 12:43:35.895573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:28.993 [2024-12-16 12:43:35.895579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.993 [2024-12-16 12:43:35.895587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:28.993 [2024-12-16 12:43:35.895592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:28.993 [2024-12-16 12:43:35.895599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.993 [2024-12-16 12:43:35.895604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:28.993 [2024-12-16 12:43:35.895611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:28.993 [2024-12-16 12:43:35.895616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.993 [2024-12-16 12:43:35.895623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:28.993 [2024-12-16 12:43:35.895628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:28.993 [2024-12-16 12:43:35.895635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.993 [2024-12-16 12:43:35.895640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:28.993 [2024-12-16 12:43:35.895647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:28.993 [2024-12-16 12:43:35.895652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.993 [2024-12-16 12:43:35.895658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:28.993 [2024-12-16 12:43:35.895663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:28.993 [2024-12-16 12:43:35.895669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.993 [2024-12-16 12:43:35.895674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:28.993 [2024-12-16 12:43:35.895682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:28.993 [2024-12-16 12:43:35.895688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.993 [2024-12-16 12:43:35.895694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:28.993 [2024-12-16 12:43:35.895699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:28.993 [2024-12-16 12:43:35.895705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.993 [2024-12-16 12:43:35.895712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:28.993 [2024-12-16 12:43:35.895719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:28.993 [2024-12-16 12:43:35.895724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.993 [2024-12-16 12:43:35.895730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:28.993 [2024-12-16 12:43:35.895735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:28.993 [2024-12-16 12:43:35.895742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.993 [2024-12-16 12:43:35.895746] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:31:28.994 [2024-12-16 12:43:35.895754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:28.994 [2024-12-16 12:43:35.895761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:28.994 [2024-12-16 12:43:35.895768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.994 [2024-12-16 12:43:35.895774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:28.994 [2024-12-16 12:43:35.895782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:28.994 [2024-12-16 12:43:35.895788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:28.994 [2024-12-16 12:43:35.895795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:28.994 [2024-12-16 12:43:35.895800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:28.994 [2024-12-16 12:43:35.895807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:28.994 [2024-12-16 12:43:35.895813] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:28.994 [2024-12-16 12:43:35.895822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.994 [2024-12-16 12:43:35.895830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:28.994 [2024-12-16 12:43:35.895838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:28.994 [2024-12-16 12:43:35.895844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:28.994 [2024-12-16 12:43:35.895851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:28.994 [2024-12-16 12:43:35.895857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:28.994 [2024-12-16 12:43:35.895863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:28.994 [2024-12-16 12:43:35.895869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:28.994 [2024-12-16 12:43:35.895876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:28.994 [2024-12-16 12:43:35.895882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:28.994 [2024-12-16 12:43:35.895891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:28.994 [2024-12-16 12:43:35.895897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:28.994 [2024-12-16 12:43:35.895904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:28.994 [2024-12-16 12:43:35.895911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:28.994 [2024-12-16 12:43:35.895919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:28.994 [2024-12-16 12:43:35.895924] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:28.994 [2024-12-16 12:43:35.895933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.994 [2024-12-16 12:43:35.895939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:28.994 [2024-12-16 12:43:35.895948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:28.994 [2024-12-16 12:43:35.895954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:28.994 [2024-12-16 12:43:35.895961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:28.994 [2024-12-16 12:43:35.895967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.994 [2024-12-16 12:43:35.895975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:28.994 [2024-12-16 12:43:35.895986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.544 ms 00:31:28.994 [2024-12-16 12:43:35.895993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.994 [2024-12-16 12:43:35.896035] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
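Everything between bdev_nvme_attach_controller and the layout dump above is the assembly of the bdev stack under test: base NVMe, lvstore (after clear_lvols deletes the stale one), a thin lvol, a split slice of the cache NVMe, and finally the FTL bdev itself. Condensed from the trace, with $rpc standing for scripts/rpc.py and $lvs_uuid/$lvol_uuid for the run-specific UUIDs printed above (0c699565-... and 3c901765-...):

$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # -> basen1, 5 GiB QEMU NVMe
$rpc bdev_lvol_create_lvstore basen1 lvs
$rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs_uuid"             # thin: 20 GiB virtual on 5 GiB media
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # -> cachen1
$rpc bdev_split_create cachen1 -s 5120 1                           # -> cachen1p0, the NV cache slice
$rpc -t 60 bdev_ftl_create -b ftl -d "$lvol_uuid" -c cachen1p0 --l2p_dram_limit 2

The -t (thin provisioning) flag is what lets a 20480 MiB volume sit on a 5 GiB base device: num_allocated_clusters starts at 0 in the dump above, and clusters are only allocated as the fill passes below write into them.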
00:31:28.994 [2024-12-16 12:43:35.896053] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:32.302 [2024-12-16 12:43:39.141569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.302 [2024-12-16 12:43:39.141607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:32.302 [2024-12-16 12:43:39.141619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3245.522 ms 00:31:32.302 [2024-12-16 12:43:39.141627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.302 [2024-12-16 12:43:39.164871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.302 [2024-12-16 12:43:39.164907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:32.302 [2024-12-16 12:43:39.164917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.073 ms 00:31:32.302 [2024-12-16 12:43:39.164926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.302 [2024-12-16 12:43:39.164986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.302 [2024-12-16 12:43:39.164996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:32.302 [2024-12-16 12:43:39.165002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:31:32.302 [2024-12-16 12:43:39.165014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.302 [2024-12-16 12:43:39.191839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.302 [2024-12-16 12:43:39.191869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:32.302 [2024-12-16 12:43:39.191877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.798 ms 00:31:32.302 [2024-12-16 12:43:39.191886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.302 [2024-12-16 12:43:39.191907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.302 [2024-12-16 12:43:39.191918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:32.302 [2024-12-16 12:43:39.191925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:32.302 [2024-12-16 12:43:39.191932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.302 [2024-12-16 12:43:39.192360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.302 [2024-12-16 12:43:39.192379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:32.302 [2024-12-16 12:43:39.192391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.392 ms 00:31:32.302 [2024-12-16 12:43:39.192400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.302 [2024-12-16 12:43:39.192432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.302 [2024-12-16 12:43:39.192441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:32.302 [2024-12-16 12:43:39.192450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:32.302 [2024-12-16 12:43:39.192460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.302 [2024-12-16 12:43:39.205555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.302 [2024-12-16 12:43:39.205584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:32.302 [2024-12-16 12:43:39.205592] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.081 ms 00:31:32.302 [2024-12-16 12:43:39.205600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.302 [2024-12-16 12:43:39.235152] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:32.302 [2024-12-16 12:43:39.236079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.302 [2024-12-16 12:43:39.236106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:32.303 [2024-12-16 12:43:39.236117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.421 ms 00:31:32.303 [2024-12-16 12:43:39.236124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.303 [2024-12-16 12:43:39.259508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.303 [2024-12-16 12:43:39.259536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:31:32.303 [2024-12-16 12:43:39.259547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.355 ms 00:31:32.303 [2024-12-16 12:43:39.259553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.303 [2024-12-16 12:43:39.259626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.303 [2024-12-16 12:43:39.259636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:32.303 [2024-12-16 12:43:39.259647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:31:32.303 [2024-12-16 12:43:39.259653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.303 [2024-12-16 12:43:39.277364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.303 [2024-12-16 12:43:39.277391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:31:32.303 [2024-12-16 12:43:39.277402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.675 ms 00:31:32.303 [2024-12-16 12:43:39.277409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.303 [2024-12-16 12:43:39.295512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.303 [2024-12-16 12:43:39.295536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:31:32.303 [2024-12-16 12:43:39.295546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.068 ms 00:31:32.303 [2024-12-16 12:43:39.295552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.303 [2024-12-16 12:43:39.295996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.303 [2024-12-16 12:43:39.296004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:32.303 [2024-12-16 12:43:39.296013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.415 ms 00:31:32.303 [2024-12-16 12:43:39.296020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.303 [2024-12-16 12:43:39.377077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.303 [2024-12-16 12:43:39.377113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:31:32.303 [2024-12-16 12:43:39.377128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 81.028 ms 00:31:32.303 [2024-12-16 12:43:39.377136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.303 [2024-12-16 12:43:39.396749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
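The layout dump above is internally consistent and can be checked by hand. The 0.8 factor below is inferred from this run's numbers (FTL holding back roughly 20% of the base device as overprovisioning); it is not printed in the log:

echo $((18432 * 1024 * 1024 / 4096))        # data_btm: 18432 MiB of 4 KiB blocks = 4718592
echo $((4718592 * 8 / 10))                  # 3774873, the reported "L2P entries"
echo "scale=2; 3774873 * 4 / 1048576" | bc  # 14.40 MiB of raw 4-byte entries, padded to 14.50
printf '%d\n' 0xe80                         # 3712 blocks = 14.50 MiB, matching the l2p region size

That ~14.5 MiB table is also why the --l2p_dram_limit 2 argument matters: ftl_l2p_cache reports 'l2p maximum resident size is: 1 (of 2) MiB' above, so only a sliver of the L2P stays in DRAM at any time and the rest is paged against the NV cache.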
00:31:32.303 [2024-12-16 12:43:39.396778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:31:32.303 [2024-12-16 12:43:39.396789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.540 ms 00:31:32.303 [2024-12-16 12:43:39.396795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.564 [2024-12-16 12:43:39.415329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.564 [2024-12-16 12:43:39.415355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:31:32.564 [2024-12-16 12:43:39.415365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.501 ms 00:31:32.564 [2024-12-16 12:43:39.415372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.564 [2024-12-16 12:43:39.434390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.564 [2024-12-16 12:43:39.434551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:32.564 [2024-12-16 12:43:39.434569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.985 ms 00:31:32.564 [2024-12-16 12:43:39.434575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.564 [2024-12-16 12:43:39.434607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.564 [2024-12-16 12:43:39.434615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:32.564 [2024-12-16 12:43:39.434626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:32.564 [2024-12-16 12:43:39.434633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.564 [2024-12-16 12:43:39.434700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.564 [2024-12-16 12:43:39.434712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:32.564 [2024-12-16 12:43:39.434720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:32.564 [2024-12-16 12:43:39.434726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.564 [2024-12-16 12:43:39.435549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3552.532 ms, result 0 00:31:32.564 { 00:31:32.564 "name": "ftl", 00:31:32.564 "uuid": "9dc1892b-efd8-429f-a5c2-6e8723f634fe" 00:31:32.564 } 00:31:32.564 12:43:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:31:32.564 [2024-12-16 12:43:39.642951] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.564 12:43:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:31:32.825 12:43:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:31:33.086 [2024-12-16 12:43:40.035275] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:33.086 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:31:33.347 [2024-12-16 12:43:40.227593] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:33.348 12:43:40 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:33.609 Fill FTL, iteration 1 00:31:33.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85987 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85987 /var/tmp/spdk.tgt.sock 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85987 ']' 00:31:33.609 12:43:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:31:33.610 12:43:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:33.610 12:43:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:31:33.610 12:43:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:33.610 12:43:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:33.610 12:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:31:33.610 [2024-12-16 12:43:40.654015] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
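With FTL up, common.sh@121-126 above exports it over NVMe/TCP on loopback so that a second process can drive I/O through a real fabric path. The RPCs as issued (only $rpc is shorthand; the trace elides save_config's redirect, which per the spdk_tgt_cnfg export earlier presumably lands in test/ftl/config/tgt.json):

$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1   # allow any host, max 1 namespace
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl       # the FTL bdev becomes namespace 1
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
$rpc save_config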
00:31:33.610 [2024-12-16 12:43:40.656136] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85987 ] 00:31:33.870 [2024-12-16 12:43:40.814879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.870 [2024-12-16 12:43:40.910530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.442 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:34.442 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:34.442 12:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:31:34.703 ftln1 00:31:34.703 12:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:31:34.703 12:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85987 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85987 ']' 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85987 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85987 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85987' 00:31:34.964 killing process with pid 85987 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85987 00:31:34.964 12:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85987 00:31:36.350 12:43:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:31:36.350 12:43:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:36.611 [2024-12-16 12:43:43.498803] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
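tcp_initiator_setup above (common.sh@151-173) exists only to manufacture a JSON bdev config that spdk_dd can replay: a second spdk_tgt on core 1 with its own RPC socket attaches to the TCP target, the resulting bdev subsystem is snapshotted, and the app is killed again. A sketch of what the trace shows, assuming the same socket and config paths:

"$spdk/build/bin/spdk_tgt" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
spdk_ini_pid=$!                # then wait for the socket as in the target launch above
irpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
$irpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2018-09.io.spdk:cnode0                 # surfaces the namespace as bdev "ftln1"
{ echo '{"subsystems": ['; $irpc save_subsystem_config -n bdev; echo ']}'; } \
      > "$spdk/test/ftl/config/ini.json"            # destination inferred from common.sh@22
kill "$spdk_ini_pid"                                # config captured; the app is disposable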
00:31:36.611 [2024-12-16 12:43:43.498919] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86031 ] 00:31:36.611 [2024-12-16 12:43:43.659507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.872 [2024-12-16 12:43:43.754131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.249  [2024-12-16T12:43:46.288Z] Copying: 237/1024 [MB] (237 MBps) [2024-12-16T12:43:47.224Z] Copying: 505/1024 [MB] (268 MBps) [2024-12-16T12:43:48.159Z] Copying: 775/1024 [MB] (270 MBps) [2024-12-16T12:43:48.729Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:31:41.623 00:31:41.623 Calculate MD5 checksum, iteration 1 00:31:41.623 12:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:41.623 12:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:41.623 12:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:41.623 12:43:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:41.623 12:43:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:41.623 12:43:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:41.623 12:43:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:41.623 12:43:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:41.623 [2024-12-16 12:43:48.690301] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
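The fill pass above pushed 1024 MiB of /dev/urandom into ftln1 at queue depth 2, averaging 260 MBps. spdk_dd runs as its own SPDK app: instead of an RPC conversation it replays ini.json at startup, which is why the initiator target could be killed beforehand. The invocation, reflowed from common.sh@199 (flags verbatim; --ob names a bdev, --if a plain file or device):

# file -> bdev; --seek is counted in --bs units, i.e. 1 MiB blocks here
"$spdk/build/bin/spdk_dd" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json="$spdk/test/ftl/config/ini.json" \
    --if=/dev/urandom --ob=ftln1 \
    --bs=1048576 --count=1024 --qd=2 --seek=0

The read-back direction just swaps the pair: --ib=ftln1 --of=test/ftl/file with --skip in place of --seek, as in the command that starts at the end of the trace above.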
00:31:41.623 [2024-12-16 12:43:48.690418] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86084 ] 00:31:41.882 [2024-12-16 12:43:48.846763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.882 [2024-12-16 12:43:48.923760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.258  [2024-12-16T12:43:50.931Z] Copying: 682/1024 [MB] (682 MBps) [2024-12-16T12:43:51.499Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:31:44.393 00:31:44.393 12:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:44.393 12:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:46.312 12:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:46.312 Fill FTL, iteration 2 00:31:46.312 12:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b7bfa0b9e9eb59bc4ff377d65c2e8a83 00:31:46.312 12:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:46.312 12:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:46.312 12:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:46.313 12:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:46.313 12:43:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:46.313 12:43:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:46.313 12:43:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:46.313 12:43:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:46.313 12:43:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:46.576 [2024-12-16 12:43:53.447134] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
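Iteration 1 closes above: the read-back averaged 675 MBps and its checksum b7bfa0b9e9eb59bc4ff377d65c2e8a83 is banked as sums[0] before the loop advances to the second fill. The upgrade_shutdown.sh@38-48 loop body, reconstructed from the trace ($file stands for test/ftl/file; tcp_dd is the wrapper around the spdk_dd call shown earlier):

bs=1048576 count=1024 qd=2 iterations=2          # upgrade_shutdown.sh@31-34
seek=0 skip=0 sums=()
for ((i = 0; i < iterations; i++)); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$((seek + count))
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$((skip + count))
    sums[i]=$(md5sum $file | cut -f1 -d' ')      # reference checksum for this 1 GiB stripe
done

The sums array is the point of the exercise, presumably so the same stripes can be re-read and compared after the shutdown/upgrade cycle this test is named for.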
00:31:46.576 [2024-12-16 12:43:53.447265] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86160 ] 00:31:46.576 [2024-12-16 12:43:53.603101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.576 [2024-12-16 12:43:53.678450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.951  [2024-12-16T12:43:55.992Z] Copying: 261/1024 [MB] (261 MBps) [2024-12-16T12:43:57.406Z] Copying: 517/1024 [MB] (256 MBps) [2024-12-16T12:43:57.990Z] Copying: 777/1024 [MB] (260 MBps) [2024-12-16T12:43:58.559Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:31:51.453 00:31:51.453 Calculate MD5 checksum, iteration 2 00:31:51.453 12:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:51.453 12:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:51.453 12:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:51.453 12:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:51.453 12:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:51.453 12:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:51.453 12:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:51.453 12:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:51.712 [2024-12-16 12:43:58.611411] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
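After the second fill, seek has reached 2048, i.e. 2 GiB of distinct random data occupies bdev offsets 0 through 2047 MiB. A rough cross-check against the NV cache geometry reported during startup (5 chunks over the 5120 MiB cache; the per-chunk data size is inferred, not printed):

echo $((2048 * 1048576 / 1073741824))   # 2 GiB written in total across both passes
echo $((5120 / 5))                      # ~1024 MiB per NV cache chunk, if split evenly

That squares with the property dump further down: two chunks CLOSED at utilization 1.0, one OPEN chunk holding a small tail, and the rest empty.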
00:31:51.712 [2024-12-16 12:43:58.611666] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86216 ] 00:31:51.712 [2024-12-16 12:43:58.766233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.971 [2024-12-16 12:43:58.843724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.344  [2024-12-16T12:44:01.017Z] Copying: 662/1024 [MB] (662 MBps) [2024-12-16T12:44:04.320Z] Copying: 1024/1024 [MB] (average 659 MBps) 00:31:57.214 00:31:57.214 12:44:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:57.214 12:44:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:59.131 12:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:59.131 12:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8427733ba994dda3767b2cf5d65796fd 00:31:59.131 12:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:59.131 12:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:59.131 12:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:59.131 [2024-12-16 12:44:05.918609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.131 [2024-12-16 12:44:05.918663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:59.131 [2024-12-16 12:44:05.918677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:59.131 [2024-12-16 12:44:05.918684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.131 [2024-12-16 12:44:05.918703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.131 [2024-12-16 12:44:05.918714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:59.131 [2024-12-16 12:44:05.918721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:59.131 [2024-12-16 12:44:05.918727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.131 [2024-12-16 12:44:05.918753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.131 [2024-12-16 12:44:05.918761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:59.131 [2024-12-16 12:44:05.918768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:59.131 [2024-12-16 12:44:05.918774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.131 [2024-12-16 12:44:05.918831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.211 ms, result 0 00:31:59.131 true 00:31:59.131 12:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:59.131 { 00:31:59.131 "name": "ftl", 00:31:59.131 "properties": [ 00:31:59.131 { 00:31:59.131 "name": "superblock_version", 00:31:59.131 "value": 5, 00:31:59.131 "read-only": true 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "name": "base_device", 00:31:59.131 "bands": [ 00:31:59.131 { 00:31:59.131 "id": 0, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 
00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 1, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 2, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 3, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 4, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 5, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 6, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 7, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 8, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 9, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 10, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 11, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 12, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 13, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 14, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 15, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 16, 00:31:59.131 "state": "FREE", 00:31:59.131 "validity": 0.0 00:31:59.131 }, 00:31:59.131 { 00:31:59.131 "id": 17, 00:31:59.132 "state": "FREE", 00:31:59.132 "validity": 0.0 00:31:59.132 } 00:31:59.132 ], 00:31:59.132 "read-only": true 00:31:59.132 }, 00:31:59.132 { 00:31:59.132 "name": "cache_device", 00:31:59.132 "type": "bdev", 00:31:59.132 "chunks": [ 00:31:59.132 { 00:31:59.132 "id": 0, 00:31:59.132 "state": "INACTIVE", 00:31:59.132 "utilization": 0.0 00:31:59.132 }, 00:31:59.132 { 00:31:59.132 "id": 1, 00:31:59.132 "state": "CLOSED", 00:31:59.132 "utilization": 1.0 00:31:59.132 }, 00:31:59.132 { 00:31:59.132 "id": 2, 00:31:59.132 "state": "CLOSED", 00:31:59.132 "utilization": 1.0 00:31:59.132 }, 00:31:59.132 { 00:31:59.132 "id": 3, 00:31:59.132 "state": "OPEN", 00:31:59.132 "utilization": 0.001953125 00:31:59.132 }, 00:31:59.132 { 00:31:59.132 "id": 4, 00:31:59.132 "state": "OPEN", 00:31:59.132 "utilization": 0.0 00:31:59.132 } 00:31:59.132 ], 00:31:59.132 "read-only": true 00:31:59.132 }, 00:31:59.132 { 00:31:59.132 "name": "verbose_mode", 00:31:59.132 "value": true, 00:31:59.132 "unit": "", 00:31:59.132 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:59.132 }, 00:31:59.132 { 00:31:59.132 "name": "prep_upgrade_on_shutdown", 00:31:59.132 "value": false, 00:31:59.132 "unit": "", 00:31:59.132 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:59.132 } 00:31:59.132 ] 00:31:59.132 } 00:31:59.132 12:44:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:59.392 [2024-12-16 12:44:06.301673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:59.392 [2024-12-16 12:44:06.301707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:59.392 [2024-12-16 12:44:06.301717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:59.392 [2024-12-16 12:44:06.301724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.392 [2024-12-16 12:44:06.301742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.392 [2024-12-16 12:44:06.301749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:59.392 [2024-12-16 12:44:06.301756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:59.392 [2024-12-16 12:44:06.301762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.392 [2024-12-16 12:44:06.301777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.392 [2024-12-16 12:44:06.301783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:59.392 [2024-12-16 12:44:06.301788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:59.392 [2024-12-16 12:44:06.301794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.392 [2024-12-16 12:44:06.301836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.161 ms, result 0 00:31:59.392 true 00:31:59.392 12:44:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:59.392 12:44:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:59.392 12:44:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:59.392 12:44:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:59.392 12:44:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:59.392 12:44:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:59.652 [2024-12-16 12:44:06.673941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.652 [2024-12-16 12:44:06.674076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:59.652 [2024-12-16 12:44:06.674120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:59.652 [2024-12-16 12:44:06.674138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.652 [2024-12-16 12:44:06.674181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.652 [2024-12-16 12:44:06.674200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:59.652 [2024-12-16 12:44:06.674216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:59.652 [2024-12-16 12:44:06.674230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.652 [2024-12-16 12:44:06.674253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.652 [2024-12-16 12:44:06.674269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:59.652 [2024-12-16 12:44:06.674285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:59.653 [2024-12-16 12:44:06.674323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:59.653 [2024-12-16 12:44:06.674378] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.423 ms, result 0 00:31:59.653 true 00:31:59.653 12:44:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:59.913 { 00:31:59.914 "name": "ftl", 00:31:59.914 "properties": [ 00:31:59.914 { 00:31:59.914 "name": "superblock_version", 00:31:59.914 "value": 5, 00:31:59.914 "read-only": true 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "name": "base_device", 00:31:59.914 "bands": [ 00:31:59.914 { 00:31:59.914 "id": 0, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 1, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 2, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 3, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 4, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 5, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 6, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 7, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 8, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 9, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 10, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 11, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 12, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 13, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 14, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 15, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 16, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 17, 00:31:59.914 "state": "FREE", 00:31:59.914 "validity": 0.0 00:31:59.914 } 00:31:59.914 ], 00:31:59.914 "read-only": true 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "name": "cache_device", 00:31:59.914 "type": "bdev", 00:31:59.914 "chunks": [ 00:31:59.914 { 00:31:59.914 "id": 0, 00:31:59.914 "state": "INACTIVE", 00:31:59.914 "utilization": 0.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 1, 00:31:59.914 "state": "CLOSED", 00:31:59.914 "utilization": 1.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 2, 00:31:59.914 "state": "CLOSED", 00:31:59.914 "utilization": 1.0 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 3, 00:31:59.914 "state": "OPEN", 00:31:59.914 "utilization": 0.001953125 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "id": 4, 00:31:59.914 "state": "OPEN", 00:31:59.914 "utilization": 0.0 00:31:59.914 } 00:31:59.914 ], 00:31:59.914 "read-only": true 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "name": "verbose_mode", 
00:31:59.914 "value": true, 00:31:59.914 "unit": "", 00:31:59.914 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:59.914 }, 00:31:59.914 { 00:31:59.914 "name": "prep_upgrade_on_shutdown", 00:31:59.914 "value": true, 00:31:59.914 "unit": "", 00:31:59.914 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:59.914 } 00:31:59.914 ] 00:31:59.914 } 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85872 ]] 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85872 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85872 ']' 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85872 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85872 00:31:59.914 killing process with pid 85872 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85872' 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85872 00:31:59.914 12:44:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85872 00:32:00.484 [2024-12-16 12:44:07.477678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:00.484 [2024-12-16 12:44:07.488520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.484 [2024-12-16 12:44:07.488555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:00.484 [2024-12-16 12:44:07.488566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:00.484 [2024-12-16 12:44:07.488573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.484 [2024-12-16 12:44:07.488593] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:00.484 [2024-12-16 12:44:07.490884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.484 [2024-12-16 12:44:07.490912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:00.484 [2024-12-16 12:44:07.490921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.278 ms 00:32:00.484 [2024-12-16 12:44:07.490928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.063453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.063521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:08.632 [2024-12-16 12:44:15.063540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7572.471 ms 00:32:08.632 [2024-12-16 12:44:15.063546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.064791] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.064807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:08.632 [2024-12-16 12:44:15.064816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.231 ms 00:32:08.632 [2024-12-16 12:44:15.064823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.065692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.065713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:08.632 [2024-12-16 12:44:15.065721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.846 ms 00:32:08.632 [2024-12-16 12:44:15.065731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.074400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.074429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:08.632 [2024-12-16 12:44:15.074437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.642 ms 00:32:08.632 [2024-12-16 12:44:15.074444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.080363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.080392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:08.632 [2024-12-16 12:44:15.080402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.892 ms 00:32:08.632 [2024-12-16 12:44:15.080408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.080472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.080485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:08.632 [2024-12-16 12:44:15.080493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:32:08.632 [2024-12-16 12:44:15.080499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.088401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.088426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:08.632 [2024-12-16 12:44:15.088434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.889 ms 00:32:08.632 [2024-12-16 12:44:15.088440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.096400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.096425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:08.632 [2024-12-16 12:44:15.096433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.935 ms 00:32:08.632 [2024-12-16 12:44:15.096439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.104070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.104236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:08.632 [2024-12-16 12:44:15.104250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.606 ms 00:32:08.632 [2024-12-16 12:44:15.104256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.111877] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.111977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:08.632 [2024-12-16 12:44:15.111989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.560 ms 00:32:08.632 [2024-12-16 12:44:15.111995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.112018] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:08.632 [2024-12-16 12:44:15.112037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:08.632 [2024-12-16 12:44:15.112045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:08.632 [2024-12-16 12:44:15.112051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:08.632 [2024-12-16 12:44:15.112058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:08.632 [2024-12-16 12:44:15.112149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:08.632 [2024-12-16 12:44:15.112168] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9dc1892b-efd8-429f-a5c2-6e8723f634fe 00:32:08.632 [2024-12-16 12:44:15.112175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:08.632 [2024-12-16 12:44:15.112181] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:32:08.632 [2024-12-16 12:44:15.112187] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:32:08.632 [2024-12-16 12:44:15.112194] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:32:08.632 [2024-12-16 12:44:15.112203] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:08.632 [2024-12-16 12:44:15.112211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:08.632 [2024-12-16 12:44:15.112220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:08.632 [2024-12-16 12:44:15.112225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:08.632 [2024-12-16 12:44:15.112235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:08.632 [2024-12-16 12:44:15.112242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.112248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:08.632 [2024-12-16 12:44:15.112256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.225 ms 00:32:08.632 [2024-12-16 12:44:15.112261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.122400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.122499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:08.632 [2024-12-16 12:44:15.122516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.125 ms 00:32:08.632 [2024-12-16 12:44:15.122523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.122813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:08.632 [2024-12-16 12:44:15.122821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:08.632 [2024-12-16 12:44:15.122827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:32:08.632 [2024-12-16 12:44:15.122833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.157841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.157964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:08.632 [2024-12-16 12:44:15.157978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.157985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.158013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.158020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:08.632 [2024-12-16 12:44:15.158027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.158033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.158102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.158111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:08.632 [2024-12-16 12:44:15.158122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.158129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.158142] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.158148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:08.632 [2024-12-16 12:44:15.158168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.158174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.220757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.220885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:08.632 [2024-12-16 12:44:15.220904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.220911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.271787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.271915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:08.632 [2024-12-16 12:44:15.271929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.271935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.272002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.272011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:08.632 [2024-12-16 12:44:15.272019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.272026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.272079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.272087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:08.632 [2024-12-16 12:44:15.272095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.272101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.272192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.272201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:08.632 [2024-12-16 12:44:15.272208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.272215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.272248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.272255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:08.632 [2024-12-16 12:44:15.272262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.272269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.272304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.272312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:08.632 [2024-12-16 12:44:15.272319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.272326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 
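The upgrade-prep shutdown traced above was initiated by the harness's killprocess helper rather than an explicit RPC: once prep_upgrade_on_shutdown is set, a plain SIGTERM is enough to make FTL persist its metadata and, per the property's own description, execute the actions needed for upgrade to a new version. A minimal sketch of that pattern, matching the kill/wait commands traced earlier and assuming the target was started as a child of the calling shell (the function name is illustrative, not the exact autotest_common.sh source):

graceful_stop() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
    kill "$pid"     # default SIGTERM; FTL runs its 'FTL shutdown' sequence
    wait "$pid"     # returns once the process (and the shutdown above) finishes
}
graceful_stop 85872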
[2024-12-16 12:44:15.272367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:08.632 [2024-12-16 12:44:15.272376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:08.632 [2024-12-16 12:44:15.272382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:08.632 [2024-12-16 12:44:15.272387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:08.632 [2024-12-16 12:44:15.272499] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7783.923 ms, result 0 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86421 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86421 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 86421 ']' 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:13.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:13.924 12:44:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:13.924 [2024-12-16 12:44:20.216601] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
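tcp_target_setup then brings the target back up from the JSON config saved before shutdown and blocks until the RPC socket answers. A rough bash equivalent of the commands traced above; waitforlisten is approximated here with a poll loop, since only its invocation appears in this log, and rpc_get_methods serves as a cheap liveness probe:

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

"$bin" '--cpumask=[0]' --config="$cfg" &
spdk_tgt_pid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # keep polling until the target listens on /var/tmp/spdk.sock
done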
00:32:13.924 [2024-12-16 12:44:20.216724] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86421 ] 00:32:13.924 [2024-12-16 12:44:20.372004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.924 [2024-12-16 12:44:20.459798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.187 [2024-12-16 12:44:21.091833] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:14.187 [2024-12-16 12:44:21.091895] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:14.187 [2024-12-16 12:44:21.240752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 12:44:21.240789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:14.187 [2024-12-16 12:44:21.240802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:14.187 [2024-12-16 12:44:21.240808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.240855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 12:44:21.240864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:14.187 [2024-12-16 12:44:21.240870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:32:14.187 [2024-12-16 12:44:21.240876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.240895] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:14.187 [2024-12-16 12:44:21.241704] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:14.187 [2024-12-16 12:44:21.241735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 12:44:21.241743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:14.187 [2024-12-16 12:44:21.241752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.847 ms 00:32:14.187 [2024-12-16 12:44:21.241758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.243080] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:14.187 [2024-12-16 12:44:21.253589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 12:44:21.253618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:14.187 [2024-12-16 12:44:21.253632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.512 ms 00:32:14.187 [2024-12-16 12:44:21.253639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.253688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 12:44:21.253696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:14.187 [2024-12-16 12:44:21.253702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:14.187 [2024-12-16 12:44:21.253708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.259953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 
12:44:21.259977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:14.187 [2024-12-16 12:44:21.259984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.201 ms 00:32:14.187 [2024-12-16 12:44:21.259990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.260113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 12:44:21.260121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:14.187 [2024-12-16 12:44:21.260128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.108 ms 00:32:14.187 [2024-12-16 12:44:21.260134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.260197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 12:44:21.260207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:14.187 [2024-12-16 12:44:21.260214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:14.187 [2024-12-16 12:44:21.260221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.260240] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:14.187 [2024-12-16 12:44:21.263261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 12:44:21.263284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:14.187 [2024-12-16 12:44:21.263291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.026 ms 00:32:14.187 [2024-12-16 12:44:21.263299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.263322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 12:44:21.263329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:14.187 [2024-12-16 12:44:21.263335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:14.187 [2024-12-16 12:44:21.263341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.263359] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:14.187 [2024-12-16 12:44:21.263377] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:14.187 [2024-12-16 12:44:21.263408] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:14.187 [2024-12-16 12:44:21.263424] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:14.187 [2024-12-16 12:44:21.263507] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:14.187 [2024-12-16 12:44:21.263516] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:14.187 [2024-12-16 12:44:21.263525] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:14.187 [2024-12-16 12:44:21.263534] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:14.187 [2024-12-16 12:44:21.263540] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:32:14.187 [2024-12-16 12:44:21.263548] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:14.187 [2024-12-16 12:44:21.263554] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:14.187 [2024-12-16 12:44:21.263560] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:14.187 [2024-12-16 12:44:21.263566] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:14.187 [2024-12-16 12:44:21.263573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 12:44:21.263579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:14.187 [2024-12-16 12:44:21.263585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.217 ms 00:32:14.187 [2024-12-16 12:44:21.263591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.263655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.187 [2024-12-16 12:44:21.263663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:14.187 [2024-12-16 12:44:21.263671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:32:14.187 [2024-12-16 12:44:21.263677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.187 [2024-12-16 12:44:21.263752] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:14.187 [2024-12-16 12:44:21.263760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:14.187 [2024-12-16 12:44:21.263767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:14.187 [2024-12-16 12:44:21.263773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.187 [2024-12-16 12:44:21.263779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:14.187 [2024-12-16 12:44:21.263784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:14.187 [2024-12-16 12:44:21.263789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:14.187 [2024-12-16 12:44:21.263794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:14.187 [2024-12-16 12:44:21.263801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:14.187 [2024-12-16 12:44:21.263806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.187 [2024-12-16 12:44:21.263812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:14.187 [2024-12-16 12:44:21.263818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:14.187 [2024-12-16 12:44:21.263822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.187 [2024-12-16 12:44:21.263828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:14.187 [2024-12-16 12:44:21.263833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:14.187 [2024-12-16 12:44:21.263838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.187 [2024-12-16 12:44:21.263849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:14.187 [2024-12-16 12:44:21.263855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:14.187 [2024-12-16 12:44:21.263860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.187 [2024-12-16 12:44:21.263865] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:14.187 [2024-12-16 12:44:21.263870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:14.187 [2024-12-16 12:44:21.263875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:14.187 [2024-12-16 12:44:21.263881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:14.187 [2024-12-16 12:44:21.263891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:14.187 [2024-12-16 12:44:21.263897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:14.187 [2024-12-16 12:44:21.263902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:14.187 [2024-12-16 12:44:21.263907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:14.187 [2024-12-16 12:44:21.263912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:14.187 [2024-12-16 12:44:21.263917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:14.187 [2024-12-16 12:44:21.263922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:14.187 [2024-12-16 12:44:21.263927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:14.187 [2024-12-16 12:44:21.263932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:14.187 [2024-12-16 12:44:21.263937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:14.187 [2024-12-16 12:44:21.263943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.187 [2024-12-16 12:44:21.263947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:14.187 [2024-12-16 12:44:21.263952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:14.188 [2024-12-16 12:44:21.263957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.188 [2024-12-16 12:44:21.263962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:14.188 [2024-12-16 12:44:21.263967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:14.188 [2024-12-16 12:44:21.263972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.188 [2024-12-16 12:44:21.263978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:14.188 [2024-12-16 12:44:21.263983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:14.188 [2024-12-16 12:44:21.263988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.188 [2024-12-16 12:44:21.263993] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:14.188 [2024-12-16 12:44:21.264000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:14.188 [2024-12-16 12:44:21.264005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:14.188 [2024-12-16 12:44:21.264011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.188 [2024-12-16 12:44:21.264019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:14.188 [2024-12-16 12:44:21.264026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:14.188 [2024-12-16 12:44:21.264031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:14.188 [2024-12-16 12:44:21.264036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:14.188 [2024-12-16 12:44:21.264041] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:14.188 [2024-12-16 12:44:21.264046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:14.188 [2024-12-16 12:44:21.264053] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:14.188 [2024-12-16 12:44:21.264061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:14.188 [2024-12-16 12:44:21.264067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:14.188 [2024-12-16 12:44:21.264072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:14.188 [2024-12-16 12:44:21.264078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:14.188 [2024-12-16 12:44:21.264083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:14.188 [2024-12-16 12:44:21.264089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:14.188 [2024-12-16 12:44:21.264094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:14.188 [2024-12-16 12:44:21.264099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:14.188 [2024-12-16 12:44:21.264106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:14.188 [2024-12-16 12:44:21.264111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:14.188 [2024-12-16 12:44:21.264116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:14.188 [2024-12-16 12:44:21.264121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:14.188 [2024-12-16 12:44:21.264126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:14.188 [2024-12-16 12:44:21.264131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:14.188 [2024-12-16 12:44:21.264137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:14.188 [2024-12-16 12:44:21.264142] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:14.188 [2024-12-16 12:44:21.264148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:14.188 [2024-12-16 12:44:21.264403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:14.188 [2024-12-16 12:44:21.264448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:14.188 [2024-12-16 12:44:21.264476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:14.188 [2024-12-16 12:44:21.264502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:14.188 [2024-12-16 12:44:21.264530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.188 [2024-12-16 12:44:21.264545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:14.188 [2024-12-16 12:44:21.264565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.831 ms 00:32:14.188 [2024-12-16 12:44:21.264579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.188 [2024-12-16 12:44:21.264653] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:32:14.188 [2024-12-16 12:44:21.264687] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:18.394 [2024-12-16 12:44:24.684604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.684642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:18.394 [2024-12-16 12:44:24.684653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3419.940 ms 00:32:18.394 [2024-12-16 12:44:24.684660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.707849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.707882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:18.394 [2024-12-16 12:44:24.707892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.018 ms 00:32:18.394 [2024-12-16 12:44:24.707899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.707962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.707974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:18.394 [2024-12-16 12:44:24.707981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:18.394 [2024-12-16 12:44:24.707988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.734597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.734753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:18.394 [2024-12-16 12:44:24.734767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.580 ms 00:32:18.394 [2024-12-16 12:44:24.734779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.734804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.734811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:18.394 [2024-12-16 12:44:24.734818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:18.394 [2024-12-16 12:44:24.734824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.735261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.735275] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:18.394 [2024-12-16 12:44:24.735283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.396 ms 00:32:18.394 [2024-12-16 12:44:24.735289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.735326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.735334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:18.394 [2024-12-16 12:44:24.735341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:32:18.394 [2024-12-16 12:44:24.735347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.748666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.748691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:18.394 [2024-12-16 12:44:24.748699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.301 ms 00:32:18.394 [2024-12-16 12:44:24.748706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.772362] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:18.394 [2024-12-16 12:44:24.772550] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:18.394 [2024-12-16 12:44:24.772574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.772589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:32:18.394 [2024-12-16 12:44:24.772603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.756 ms 00:32:18.394 [2024-12-16 12:44:24.772614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.783735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.783844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:32:18.394 [2024-12-16 12:44:24.783857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.069 ms 00:32:18.394 [2024-12-16 12:44:24.783864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.793067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.793092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:32:18.394 [2024-12-16 12:44:24.793100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.170 ms 00:32:18.394 [2024-12-16 12:44:24.793106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.802234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.802335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:32:18.394 [2024-12-16 12:44:24.802347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.095 ms 00:32:18.394 [2024-12-16 12:44:24.802352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.802830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.802848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:18.394 [2024-12-16 
12:44:24.802856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.416 ms 00:32:18.394 [2024-12-16 12:44:24.802862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.851328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.851361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:18.394 [2024-12-16 12:44:24.851372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.450 ms 00:32:18.394 [2024-12-16 12:44:24.851379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.859743] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:18.394 [2024-12-16 12:44:24.860548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.860572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:18.394 [2024-12-16 12:44:24.860581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.133 ms 00:32:18.394 [2024-12-16 12:44:24.860588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.860659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.860670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:32:18.394 [2024-12-16 12:44:24.860677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:32:18.394 [2024-12-16 12:44:24.860683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.860721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.860729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:18.394 [2024-12-16 12:44:24.860735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:32:18.394 [2024-12-16 12:44:24.860742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.860760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.860767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:18.394 [2024-12-16 12:44:24.860776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:18.394 [2024-12-16 12:44:24.860783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.394 [2024-12-16 12:44:24.860813] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:18.394 [2024-12-16 12:44:24.860821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.394 [2024-12-16 12:44:24.860828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:18.394 [2024-12-16 12:44:24.860834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:32:18.394 [2024-12-16 12:44:24.860840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.395 [2024-12-16 12:44:24.879558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.395 [2024-12-16 12:44:24.879588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:18.395 [2024-12-16 12:44:24.879597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.703 ms 00:32:18.395 [2024-12-16 12:44:24.879604] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.395 [2024-12-16 12:44:24.879662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.395 [2024-12-16 12:44:24.879669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:18.395 [2024-12-16 12:44:24.879677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:32:18.395 [2024-12-16 12:44:24.879683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.395 [2024-12-16 12:44:24.880596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3639.453 ms, result 0 00:32:18.395 [2024-12-16 12:44:24.895845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.395 [2024-12-16 12:44:24.911843] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:18.395 [2024-12-16 12:44:24.919987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:18.395 12:44:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.395 12:44:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:18.395 12:44:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:18.395 12:44:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:18.395 12:44:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:18.395 [2024-12-16 12:44:25.147998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.395 [2024-12-16 12:44:25.148030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:18.395 [2024-12-16 12:44:25.148044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:18.395 [2024-12-16 12:44:25.148052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.395 [2024-12-16 12:44:25.148070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.395 [2024-12-16 12:44:25.148078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:18.395 [2024-12-16 12:44:25.148086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:18.395 [2024-12-16 12:44:25.148093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.395 [2024-12-16 12:44:25.148109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.395 [2024-12-16 12:44:25.148115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:18.395 [2024-12-16 12:44:25.148122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:18.395 [2024-12-16 12:44:25.148128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.395 [2024-12-16 12:44:25.148187] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.170 ms, result 0 00:32:18.395 true 00:32:18.395 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:18.395 { 00:32:18.395 "name": "ftl", 00:32:18.395 "properties": [ 00:32:18.395 { 00:32:18.395 "name": "superblock_version", 00:32:18.395 "value": 5, 00:32:18.395 "read-only": true 00:32:18.395 }, 
00:32:18.395 { 00:32:18.395 "name": "base_device", 00:32:18.395 "bands": [ 00:32:18.395 { 00:32:18.395 "id": 0, 00:32:18.395 "state": "CLOSED", 00:32:18.395 "validity": 1.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 1, 00:32:18.395 "state": "CLOSED", 00:32:18.395 "validity": 1.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 2, 00:32:18.395 "state": "CLOSED", 00:32:18.395 "validity": 0.007843137254901933 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 3, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 4, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 5, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 6, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 7, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 8, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 9, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 10, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 11, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 12, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 13, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 14, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 15, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 16, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 17, 00:32:18.395 "state": "FREE", 00:32:18.395 "validity": 0.0 00:32:18.395 } 00:32:18.395 ], 00:32:18.395 "read-only": true 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "name": "cache_device", 00:32:18.395 "type": "bdev", 00:32:18.395 "chunks": [ 00:32:18.395 { 00:32:18.395 "id": 0, 00:32:18.395 "state": "INACTIVE", 00:32:18.395 "utilization": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 1, 00:32:18.395 "state": "OPEN", 00:32:18.395 "utilization": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 2, 00:32:18.395 "state": "OPEN", 00:32:18.395 "utilization": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 3, 00:32:18.395 "state": "FREE", 00:32:18.395 "utilization": 0.0 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "id": 4, 00:32:18.395 "state": "FREE", 00:32:18.395 "utilization": 0.0 00:32:18.395 } 00:32:18.395 ], 00:32:18.395 "read-only": true 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "name": "verbose_mode", 00:32:18.395 "value": true, 00:32:18.395 "unit": "", 00:32:18.395 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:18.395 }, 00:32:18.395 { 00:32:18.395 "name": "prep_upgrade_on_shutdown", 00:32:18.395 "value": false, 00:32:18.395 "unit": "", 00:32:18.395 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:18.395 } 00:32:18.395 ] 00:32:18.395 } 00:32:18.395 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:32:18.395 12:44:25 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:18.395 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:18.656 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:32:18.656 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:32:18.656 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:32:18.656 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:32:18.656 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:18.917 Validate MD5 checksum, iteration 1 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:18.917 12:44:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:18.917 [2024-12-16 12:44:25.854309] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
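The tcp_dd helper traced just above expands to a single spdk_dd invocation; reflowed here for readability (command and values verbatim from the trace, flag readings inferred from context):

    # spdk_dd runs as an NVMe/TCP initiator on core 1, configured by ini.json.
    # --ib=ftln1: input bdev, the FTL namespace attached over NVMe/TCP
    # --of: plain output file; --bs=1048576 with --count=1024 gives 1 GiB per pass
    # --qd=2: queue depth; --skip: input offset counted in --bs-sized blocks
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0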
00:32:18.917 [2024-12-16 12:44:25.854422] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86501 ] 00:32:18.917 [2024-12-16 12:44:26.015370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.177 [2024-12-16 12:44:26.112859] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.565  [2024-12-16T12:44:28.616Z] Copying: 604/1024 [MB] (604 MBps) [2024-12-16T12:44:29.560Z] Copying: 1024/1024 [MB] (average 608 MBps) 00:32:22.454 00:32:22.454 12:44:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:22.454 12:44:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:25.002 Validate MD5 checksum, iteration 2 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b7bfa0b9e9eb59bc4ff377d65c2e8a83 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b7bfa0b9e9eb59bc4ff377d65c2e8a83 != \b\7\b\f\a\0\b\9\e\9\e\b\5\9\b\c\4\f\f\3\7\7\d\6\5\c\2\e\8\a\8\3 ]] 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:25.002 12:44:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:25.002 [2024-12-16 12:44:31.733488] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
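Pieced together from the xtrace fragments above, the checksum pass in upgrade_shutdown.sh amounts to the loop below. This is a minimal sketch: $testfile stands for the traced /home/vagrant/spdk_repo/spdk/test/ftl/file, and checksums holds the expected sums seen in the trace (b7bfa0b9... for the first 1 GiB region, 8427733b... for the second).

    # sketch of test_validate_checksum: read 1 GiB per iteration over NVMe/TCP,
    # then compare the md5 of the readback against the expected sum for that region
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum != "${checksums[i]}" ]] && return 1   # any mismatch fails the test
    done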
00:32:25.002 [2024-12-16 12:44:31.733601] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86558 ] 00:32:25.002 [2024-12-16 12:44:31.893741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.002 [2024-12-16 12:44:31.990180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.919  [2024-12-16T12:44:34.285Z] Copying: 640/1024 [MB] (640 MBps) [2024-12-16T12:44:35.237Z] Copying: 1024/1024 [MB] (average 637 MBps) 00:32:28.131 00:32:28.131 12:44:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:28.131 12:44:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8427733ba994dda3767b2cf5d65796fd 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8427733ba994dda3767b2cf5d65796fd != \8\4\2\7\7\3\3\b\a\9\9\4\d\d\a\3\7\6\7\b\2\c\f\5\d\6\5\7\9\6\f\d ]] 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 86421 ]] 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 86421 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:30.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86625 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86625 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 86625 ']' 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:30.717 12:44:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:30.717 [2024-12-16 12:44:37.417715] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:32:30.717 [2024-12-16 12:44:37.417832] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86625 ] 00:32:30.717 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 86421 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:30.717 [2024-12-16 12:44:37.567397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.717 [2024-12-16 12:44:37.656021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.289 [2024-12-16 12:44:38.290171] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:31.289 [2024-12-16 12:44:38.290232] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:31.553 [2024-12-16 12:44:38.438815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.553 [2024-12-16 12:44:38.438848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:31.553 [2024-12-16 12:44:38.438861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:31.553 [2024-12-16 12:44:38.438867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.438913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.553 [2024-12-16 12:44:38.438922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:31.553 [2024-12-16 12:44:38.438928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:32:31.553 [2024-12-16 12:44:38.438935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.438954] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:31.553 [2024-12-16 12:44:38.439542] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:31.553 [2024-12-16 12:44:38.439557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.553 [2024-12-16 12:44:38.439564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:31.553 [2024-12-16 12:44:38.439571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.610 ms 00:32:31.553 [2024-12-16 12:44:38.439576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.439762] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:31.553 [2024-12-16 12:44:38.453696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.553 [2024-12-16 12:44:38.453725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:31.553 [2024-12-16 12:44:38.453735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.934 ms 00:32:31.553 [2024-12-16 12:44:38.453741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.460789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:32:31.553 [2024-12-16 12:44:38.460815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:31.553 [2024-12-16 12:44:38.460823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:32:31.553 [2024-12-16 12:44:38.460830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.461082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.553 [2024-12-16 12:44:38.461095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:31.553 [2024-12-16 12:44:38.461103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:32:31.553 [2024-12-16 12:44:38.461109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.461150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.553 [2024-12-16 12:44:38.461171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:31.553 [2024-12-16 12:44:38.461178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:32:31.553 [2024-12-16 12:44:38.461184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.461205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.553 [2024-12-16 12:44:38.461213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:31.553 [2024-12-16 12:44:38.461219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:31.553 [2024-12-16 12:44:38.461225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.461241] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:31.553 [2024-12-16 12:44:38.463623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.553 [2024-12-16 12:44:38.463645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:31.553 [2024-12-16 12:44:38.463652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.386 ms 00:32:31.553 [2024-12-16 12:44:38.463658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.463687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.553 [2024-12-16 12:44:38.463694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:31.553 [2024-12-16 12:44:38.463700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:31.553 [2024-12-16 12:44:38.463706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.463722] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:31.553 [2024-12-16 12:44:38.463739] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:31.553 [2024-12-16 12:44:38.463766] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:31.553 [2024-12-16 12:44:38.463780] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:31.553 [2024-12-16 12:44:38.463862] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:31.553 [2024-12-16 12:44:38.463871] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:31.553 [2024-12-16 12:44:38.463879] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:31.553 [2024-12-16 12:44:38.463887] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:31.553 [2024-12-16 12:44:38.463895] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:31.553 [2024-12-16 12:44:38.463901] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:31.553 [2024-12-16 12:44:38.463908] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:31.553 [2024-12-16 12:44:38.463913] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:31.553 [2024-12-16 12:44:38.463919] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:31.553 [2024-12-16 12:44:38.463924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.553 [2024-12-16 12:44:38.463932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:31.553 [2024-12-16 12:44:38.463938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:32:31.553 [2024-12-16 12:44:38.463944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.464008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.553 [2024-12-16 12:44:38.464015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:31.553 [2024-12-16 12:44:38.464020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:32:31.553 [2024-12-16 12:44:38.464026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.553 [2024-12-16 12:44:38.464102] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:31.553 [2024-12-16 12:44:38.464110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:31.553 [2024-12-16 12:44:38.464118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:31.553 [2024-12-16 12:44:38.464125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:31.553 [2024-12-16 12:44:38.464132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:31.553 [2024-12-16 12:44:38.464137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:31.553 [2024-12-16 12:44:38.464142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:31.553 [2024-12-16 12:44:38.464148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:31.553 [2024-12-16 12:44:38.464154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:31.553 [2024-12-16 12:44:38.464172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:31.553 [2024-12-16 12:44:38.464178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:31.553 [2024-12-16 12:44:38.464185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:31.553 [2024-12-16 12:44:38.464191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:31.553 [2024-12-16 12:44:38.464196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:31.553 [2024-12-16 12:44:38.464202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:32:31.553 [2024-12-16 12:44:38.464207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:31.553 [2024-12-16 12:44:38.464212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:31.553 [2024-12-16 12:44:38.464217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:31.553 [2024-12-16 12:44:38.464223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:31.553 [2024-12-16 12:44:38.464228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:31.553 [2024-12-16 12:44:38.464234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:31.553 [2024-12-16 12:44:38.464243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:31.553 [2024-12-16 12:44:38.464249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:31.553 [2024-12-16 12:44:38.464254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:31.553 [2024-12-16 12:44:38.464259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:31.553 [2024-12-16 12:44:38.464265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:31.553 [2024-12-16 12:44:38.464270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:31.553 [2024-12-16 12:44:38.464275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:31.553 [2024-12-16 12:44:38.464280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:31.553 [2024-12-16 12:44:38.464285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:31.553 [2024-12-16 12:44:38.464290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:31.553 [2024-12-16 12:44:38.464295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:31.554 [2024-12-16 12:44:38.464300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:31.554 [2024-12-16 12:44:38.464304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:31.554 [2024-12-16 12:44:38.464310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:31.554 [2024-12-16 12:44:38.464315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:31.554 [2024-12-16 12:44:38.464320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:31.554 [2024-12-16 12:44:38.464325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:31.554 [2024-12-16 12:44:38.464329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:31.554 [2024-12-16 12:44:38.464334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:31.554 [2024-12-16 12:44:38.464339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:31.554 [2024-12-16 12:44:38.464344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:31.554 [2024-12-16 12:44:38.464349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:31.554 [2024-12-16 12:44:38.464357] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:31.554 [2024-12-16 12:44:38.464363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:31.554 [2024-12-16 12:44:38.464369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:31.554 [2024-12-16 12:44:38.464375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:32:31.554 [2024-12-16 12:44:38.464381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:31.554 [2024-12-16 12:44:38.464386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:31.554 [2024-12-16 12:44:38.464392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:31.554 [2024-12-16 12:44:38.464397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:31.554 [2024-12-16 12:44:38.464402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:31.554 [2024-12-16 12:44:38.464406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:31.554 [2024-12-16 12:44:38.464413] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:31.554 [2024-12-16 12:44:38.464420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:31.554 [2024-12-16 12:44:38.464426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:31.554 [2024-12-16 12:44:38.464432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:31.554 [2024-12-16 12:44:38.464438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:31.554 [2024-12-16 12:44:38.464444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:31.554 [2024-12-16 12:44:38.464449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:31.554 [2024-12-16 12:44:38.464454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:31.554 [2024-12-16 12:44:38.464470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:31.554 [2024-12-16 12:44:38.464476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:31.554 [2024-12-16 12:44:38.464482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:31.554 [2024-12-16 12:44:38.464487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:31.554 [2024-12-16 12:44:38.464493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:31.554 [2024-12-16 12:44:38.464498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:31.554 [2024-12-16 12:44:38.464503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:31.554 [2024-12-16 12:44:38.464509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:31.554 [2024-12-16 12:44:38.464514] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:32:31.554 [2024-12-16 12:44:38.464520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:31.554 [2024-12-16 12:44:38.464529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:31.554 [2024-12-16 12:44:38.464535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:31.554 [2024-12-16 12:44:38.464540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:31.554 [2024-12-16 12:44:38.464545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:31.554 [2024-12-16 12:44:38.464556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.464562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:31.554 [2024-12-16 12:44:38.464568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:32:31.554 [2024-12-16 12:44:38.464573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.485982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.486008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:31.554 [2024-12-16 12:44:38.486018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.370 ms 00:32:31.554 [2024-12-16 12:44:38.486024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.486055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.486062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:31.554 [2024-12-16 12:44:38.486069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:32:31.554 [2024-12-16 12:44:38.486075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.512369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.512394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:31.554 [2024-12-16 12:44:38.512402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.255 ms 00:32:31.554 [2024-12-16 12:44:38.512409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.512431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.512437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:31.554 [2024-12-16 12:44:38.512444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:31.554 [2024-12-16 12:44:38.512452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.512523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.512531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:31.554 [2024-12-16 12:44:38.512537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:32:31.554 [2024-12-16 12:44:38.512543] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.512577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.512584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:31.554 [2024-12-16 12:44:38.512591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:31.554 [2024-12-16 12:44:38.512596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.525817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.525843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:31.554 [2024-12-16 12:44:38.525850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.203 ms 00:32:31.554 [2024-12-16 12:44:38.525856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.525935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.525944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:31.554 [2024-12-16 12:44:38.525950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:31.554 [2024-12-16 12:44:38.525956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.559074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.559244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:31.554 [2024-12-16 12:44:38.559260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.104 ms 00:32:31.554 [2024-12-16 12:44:38.559267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.566614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.566703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:31.554 [2024-12-16 12:44:38.566724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.398 ms 00:32:31.554 [2024-12-16 12:44:38.566730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.614207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.614243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:31.554 [2024-12-16 12:44:38.614252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.433 ms 00:32:31.554 [2024-12-16 12:44:38.614260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.614386] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:31.554 [2024-12-16 12:44:38.614490] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:31.554 [2024-12-16 12:44:38.614587] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:31.554 [2024-12-16 12:44:38.614687] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:31.554 [2024-12-16 12:44:38.614695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.614702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:31.554 [2024-12-16 
12:44:38.614709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms 00:32:31.554 [2024-12-16 12:44:38.614715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.554 [2024-12-16 12:44:38.614756] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:31.554 [2024-12-16 12:44:38.614766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.554 [2024-12-16 12:44:38.614775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:31.554 [2024-12-16 12:44:38.614782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:32:31.555 [2024-12-16 12:44:38.614789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.555 [2024-12-16 12:44:38.627123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.555 [2024-12-16 12:44:38.627167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:31.555 [2024-12-16 12:44:38.627176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.317 ms 00:32:31.555 [2024-12-16 12:44:38.627182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.555 [2024-12-16 12:44:38.633739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.555 [2024-12-16 12:44:38.633840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:31.555 [2024-12-16 12:44:38.633852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:31.555 [2024-12-16 12:44:38.633859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.555 [2024-12-16 12:44:38.633929] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:31.555 [2024-12-16 12:44:38.634082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.555 [2024-12-16 12:44:38.634092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:31.555 [2024-12-16 12:44:38.634100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.154 ms 00:32:31.555 [2024-12-16 12:44:38.634106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.500 [2024-12-16 12:44:39.585510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:32.500 [2024-12-16 12:44:39.585568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:32.500 [2024-12-16 12:44:39.585583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 950.794 ms 00:32:32.500 [2024-12-16 12:44:39.585595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.500 [2024-12-16 12:44:39.590649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:32.500 [2024-12-16 12:44:39.590698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:32.500 [2024-12-16 12:44:39.590712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.034 ms 00:32:32.500 [2024-12-16 12:44:39.590722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.500 [2024-12-16 12:44:39.591605] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:32:32.500 [2024-12-16 12:44:39.591646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:32.500 [2024-12-16 12:44:39.591656] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:32.500 [2024-12-16 12:44:39.591667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.880 ms 00:32:32.501 [2024-12-16 12:44:39.591675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.501 [2024-12-16 12:44:39.591713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:32.501 [2024-12-16 12:44:39.591724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:32.501 [2024-12-16 12:44:39.591733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:32.501 [2024-12-16 12:44:39.591749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.501 [2024-12-16 12:44:39.591784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 957.849 ms, result 0 00:32:32.501 [2024-12-16 12:44:39.591828] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:32.501 [2024-12-16 12:44:39.592089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:32.501 [2024-12-16 12:44:39.592105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:32.501 [2024-12-16 12:44:39.592114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:32:32.501 [2024-12-16 12:44:39.592122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.501408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.501485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:33.446 [2024-12-16 12:44:40.501517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 908.310 ms 00:32:33.446 [2024-12-16 12:44:40.501526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.506284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.506335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:33.446 [2024-12-16 12:44:40.506347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.635 ms 00:32:33.446 [2024-12-16 12:44:40.506356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.507243] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:33.446 [2024-12-16 12:44:40.507291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.507300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:33.446 [2024-12-16 12:44:40.507311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.915 ms 00:32:33.446 [2024-12-16 12:44:40.507320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.507363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.507373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:33.446 [2024-12-16 12:44:40.507382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:33.446 [2024-12-16 12:44:40.507390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 
12:44:40.507431] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 915.593 ms, result 0 00:32:33.446 [2024-12-16 12:44:40.507483] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:33.446 [2024-12-16 12:44:40.507496] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:33.446 [2024-12-16 12:44:40.507507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.507517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:33.446 [2024-12-16 12:44:40.507528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1873.592 ms 00:32:33.446 [2024-12-16 12:44:40.507537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.507568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.507584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:33.446 [2024-12-16 12:44:40.507594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:33.446 [2024-12-16 12:44:40.507601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.520979] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:33.446 [2024-12-16 12:44:40.521391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.521416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:33.446 [2024-12-16 12:44:40.521430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.771 ms 00:32:33.446 [2024-12-16 12:44:40.521440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.522206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.522235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:33.446 [2024-12-16 12:44:40.522250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.670 ms 00:32:33.446 [2024-12-16 12:44:40.522258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.524484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.524657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:33.446 [2024-12-16 12:44:40.524677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.207 ms 00:32:33.446 [2024-12-16 12:44:40.524686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.524739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.524749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:33.446 [2024-12-16 12:44:40.524759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:33.446 [2024-12-16 12:44:40.524774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.524893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.524906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:33.446 
[2024-12-16 12:44:40.524914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:32:33.446 [2024-12-16 12:44:40.524923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.524945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.524954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:33.446 [2024-12-16 12:44:40.524962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:33.446 [2024-12-16 12:44:40.524971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.525018] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:33.446 [2024-12-16 12:44:40.525028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.525037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:33.446 [2024-12-16 12:44:40.525046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:32:33.446 [2024-12-16 12:44:40.525055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.525111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.446 [2024-12-16 12:44:40.525122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:33.446 [2024-12-16 12:44:40.525130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:32:33.446 [2024-12-16 12:44:40.525138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.446 [2024-12-16 12:44:40.527500] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2088.062 ms, result 0 00:32:33.446 [2024-12-16 12:44:40.542130] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:33.708 [2024-12-16 12:44:40.558135] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:33.708 [2024-12-16 12:44:40.567764] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:33.708 Validate MD5 checksum, iteration 1 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:33.708 12:44:40 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:33.708 12:44:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:33.708 [2024-12-16 12:44:40.683838] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:32:33.708 [2024-12-16 12:44:40.684178] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86666 ] 00:32:33.969 [2024-12-16 12:44:40.841380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.969 [2024-12-16 12:44:40.962726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.886  [2024-12-16T12:44:43.253Z] Copying: 571/1024 [MB] (571 MBps) [2024-12-16T12:44:44.191Z] Copying: 1024/1024 [MB] (average 592 MBps) 00:32:37.085 00:32:37.085 12:44:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:37.085 12:44:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b7bfa0b9e9eb59bc4ff377d65c2e8a83 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b7bfa0b9e9eb59bc4ff377d65c2e8a83 != \b\7\b\f\a\0\b\9\e\9\e\b\5\9\b\c\4\f\f\3\7\7\d\6\5\c\2\e\8\a\8\3 ]] 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:39.631 Validate MD5 checksum, iteration 2 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:39.631 12:44:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:39.631 [2024-12-16 12:44:46.394590] Starting SPDK v25.01-pre git sha1 
e01cb43b8 / DPDK 24.03.0 initialization... 00:32:39.632 [2024-12-16 12:44:46.394772] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86728 ] 00:32:39.632 [2024-12-16 12:44:46.543580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.632 [2024-12-16 12:44:46.619814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.006  [2024-12-16T12:44:48.680Z] Copying: 683/1024 [MB] (683 MBps) [2024-12-16T12:44:49.248Z] Copying: 1024/1024 [MB] (average 676 MBps) 00:32:42.142 00:32:42.142 12:44:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:42.142 12:44:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8427733ba994dda3767b2cf5d65796fd 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8427733ba994dda3767b2cf5d65796fd != \8\4\2\7\7\3\3\b\a\9\9\4\d\d\a\3\7\6\7\b\2\c\f\5\d\6\5\7\9\6\f\d ]] 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86625 ]] 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86625 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 86625 ']' 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 86625 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86625 00:32:44.042 killing process with pid 86625 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86625' 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 86625 00:32:44.042 12:44:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 86625 00:32:44.615 [2024-12-16 12:44:51.552595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:44.615 [2024-12-16 12:44:51.563509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.615 [2024-12-16 12:44:51.563543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:44.615 [2024-12-16 12:44:51.563555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:44.615 [2024-12-16 12:44:51.563563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.615 [2024-12-16 12:44:51.563582] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:44.615 [2024-12-16 12:44:51.565774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.615 [2024-12-16 12:44:51.565800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:44.615 [2024-12-16 12:44:51.565812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.181 ms 00:32:44.615 [2024-12-16 12:44:51.565820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.615 [2024-12-16 12:44:51.566011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.615 [2024-12-16 12:44:51.566021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:44.615 [2024-12-16 12:44:51.566028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:32:44.615 [2024-12-16 12:44:51.566034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.615 [2024-12-16 12:44:51.567536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.615 [2024-12-16 12:44:51.567578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:44.615 [2024-12-16 12:44:51.567596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.489 ms 00:32:44.615 [2024-12-16 12:44:51.567616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.615 [2024-12-16 12:44:51.568489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.616 [2024-12-16 12:44:51.568570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:44.616 [2024-12-16 12:44:51.568615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.839 ms 00:32:44.616 [2024-12-16 12:44:51.568634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.576081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.616 [2024-12-16 12:44:51.576186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:44.616 [2024-12-16 12:44:51.576234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.408 ms 00:32:44.616 [2024-12-16 12:44:51.576257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.580509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.616 [2024-12-16 12:44:51.580601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:44.616 [2024-12-16 12:44:51.580705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.218 ms 00:32:44.616 [2024-12-16 12:44:51.580725] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.580794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.616 [2024-12-16 12:44:51.580914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:44.616 [2024-12-16 12:44:51.580933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:32:44.616 [2024-12-16 12:44:51.580952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.588111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.616 [2024-12-16 12:44:51.588203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:44.616 [2024-12-16 12:44:51.588247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.137 ms 00:32:44.616 [2024-12-16 12:44:51.588264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.596365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.616 [2024-12-16 12:44:51.596444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:44.616 [2024-12-16 12:44:51.596480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.058 ms 00:32:44.616 [2024-12-16 12:44:51.596496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.604076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.616 [2024-12-16 12:44:51.604161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:44.616 [2024-12-16 12:44:51.604199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.549 ms 00:32:44.616 [2024-12-16 12:44:51.604216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.611573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.616 [2024-12-16 12:44:51.611650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:44.616 [2024-12-16 12:44:51.611688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.306 ms 00:32:44.616 [2024-12-16 12:44:51.611704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.611735] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:44.616 [2024-12-16 12:44:51.611756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:44.616 [2024-12-16 12:44:51.611781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:44.616 [2024-12-16 12:44:51.611803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:44.616 [2024-12-16 12:44:51.611825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.611873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.611895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.611939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.611964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 
[2024-12-16 12:44:51.611986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.612177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.612185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.612191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.612197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.612203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.612209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.612215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.612221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.612227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:44.616 [2024-12-16 12:44:51.612235] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:44.616 [2024-12-16 12:44:51.612241] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9dc1892b-efd8-429f-a5c2-6e8723f634fe 00:32:44.616 [2024-12-16 12:44:51.612247] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:44.616 [2024-12-16 12:44:51.612252] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:44.616 [2024-12-16 12:44:51.612258] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:44.616 [2024-12-16 12:44:51.612264] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:44.616 [2024-12-16 12:44:51.612269] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:44.616 [2024-12-16 12:44:51.612275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:44.616 [2024-12-16 12:44:51.612283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:44.616 [2024-12-16 12:44:51.612288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:44.616 [2024-12-16 12:44:51.612294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:44.616 [2024-12-16 12:44:51.612300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.616 [2024-12-16 12:44:51.612311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:44.616 [2024-12-16 12:44:51.612318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.565 ms 00:32:44.616 [2024-12-16 12:44:51.612326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.622534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.616 [2024-12-16 12:44:51.622620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:44.616 [2024-12-16 12:44:51.622631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.185 ms 00:32:44.616 [2024-12-16 12:44:51.622637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:32:44.616 [2024-12-16 12:44:51.622931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:44.616 [2024-12-16 12:44:51.622939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:44.616 [2024-12-16 12:44:51.622946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:32:44.616 [2024-12-16 12:44:51.622951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.657835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.616 [2024-12-16 12:44:51.657926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:44.616 [2024-12-16 12:44:51.657937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.616 [2024-12-16 12:44:51.657947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.657970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.616 [2024-12-16 12:44:51.657977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:44.616 [2024-12-16 12:44:51.657984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.616 [2024-12-16 12:44:51.657990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.658059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.616 [2024-12-16 12:44:51.658068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:44.616 [2024-12-16 12:44:51.658075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.616 [2024-12-16 12:44:51.658082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.616 [2024-12-16 12:44:51.658099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.616 [2024-12-16 12:44:51.658106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:44.616 [2024-12-16 12:44:51.658112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.616 [2024-12-16 12:44:51.658118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.878 [2024-12-16 12:44:51.721020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.878 [2024-12-16 12:44:51.721145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:44.878 [2024-12-16 12:44:51.721168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.878 [2024-12-16 12:44:51.721176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.878 [2024-12-16 12:44:51.772720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.878 [2024-12-16 12:44:51.772754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:44.878 [2024-12-16 12:44:51.772763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.878 [2024-12-16 12:44:51.772769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.878 [2024-12-16 12:44:51.772835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.878 [2024-12-16 12:44:51.772843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:44.878 [2024-12-16 12:44:51.772850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.878 [2024-12-16 12:44:51.772857] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.878 [2024-12-16 12:44:51.772911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.878 [2024-12-16 12:44:51.772928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:44.878 [2024-12-16 12:44:51.772935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.878 [2024-12-16 12:44:51.772941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.878 [2024-12-16 12:44:51.773020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.878 [2024-12-16 12:44:51.773028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:44.878 [2024-12-16 12:44:51.773035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.878 [2024-12-16 12:44:51.773041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.878 [2024-12-16 12:44:51.773068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.878 [2024-12-16 12:44:51.773076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:44.878 [2024-12-16 12:44:51.773085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.878 [2024-12-16 12:44:51.773092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.878 [2024-12-16 12:44:51.773126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.878 [2024-12-16 12:44:51.773133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:44.878 [2024-12-16 12:44:51.773140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.878 [2024-12-16 12:44:51.773145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.878 [2024-12-16 12:44:51.773198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:44.878 [2024-12-16 12:44:51.773210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:44.878 [2024-12-16 12:44:51.773216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:44.878 [2024-12-16 12:44:51.773222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:44.878 [2024-12-16 12:44:51.773344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 209.792 ms, result 0 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:45.449 Remove shared memory files 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:45.449 12:44:52 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid86421 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:45.449 ************************************ 00:32:45.449 END TEST ftl_upgrade_shutdown 00:32:45.449 ************************************ 00:32:45.449 00:32:45.449 real 1m19.969s 00:32:45.449 user 1m49.249s 00:32:45.449 sys 0m19.071s 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.449 12:44:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:45.449 12:44:52 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:32:45.449 12:44:52 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:45.449 12:44:52 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:32:45.449 12:44:52 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.449 12:44:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:45.449 ************************************ 00:32:45.449 START TEST ftl_restore_fast 00:32:45.449 ************************************ 00:32:45.449 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:45.711 * Looking for test storage... 00:32:45.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.711 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:45.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.712 --rc genhtml_branch_coverage=1 00:32:45.712 --rc genhtml_function_coverage=1 00:32:45.712 --rc genhtml_legend=1 00:32:45.712 --rc geninfo_all_blocks=1 00:32:45.712 --rc geninfo_unexecuted_blocks=1 00:32:45.712 00:32:45.712 ' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:45.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.712 --rc genhtml_branch_coverage=1 00:32:45.712 --rc genhtml_function_coverage=1 00:32:45.712 --rc genhtml_legend=1 00:32:45.712 --rc geninfo_all_blocks=1 00:32:45.712 --rc geninfo_unexecuted_blocks=1 00:32:45.712 00:32:45.712 ' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:45.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.712 --rc genhtml_branch_coverage=1 00:32:45.712 --rc genhtml_function_coverage=1 00:32:45.712 --rc genhtml_legend=1 00:32:45.712 --rc geninfo_all_blocks=1 00:32:45.712 --rc geninfo_unexecuted_blocks=1 00:32:45.712 00:32:45.712 ' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:45.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.712 --rc genhtml_branch_coverage=1 00:32:45.712 --rc genhtml_function_coverage=1 00:32:45.712 --rc genhtml_legend=1 00:32:45.712 --rc geninfo_all_blocks=1 00:32:45.712 --rc geninfo_unexecuted_blocks=1 00:32:45.712 00:32:45.712 ' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.DC71hK0SCL 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:32:45.712 12:44:52 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=86873 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 86873 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 86873 ']' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:45.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.712 12:44:52 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:45.712 [2024-12-16 12:44:52.768634] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:32:45.712 [2024-12-16 12:44:52.768860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86873 ] 00:32:45.973 [2024-12-16 12:44:52.919669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.973 [2024-12-16 12:44:53.005099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.544 12:44:53 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.544 12:44:53 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:32:46.544 12:44:53 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:32:46.544 12:44:53 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:32:46.544 12:44:53 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:46.544 12:44:53 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:32:46.544 12:44:53 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:32:46.544 12:44:53 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:46.805 12:44:53 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:32:46.805 12:44:53 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:32:46.805 12:44:53 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:32:46.805 12:44:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:32:46.805 12:44:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:46.805 12:44:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:46.805 12:44:53 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:32:46.805 12:44:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:47.067 { 00:32:47.067 "name": "nvme0n1", 00:32:47.067 "aliases": [ 00:32:47.067 "3b093172-22de-49cd-a8a5-1d1289bdffaa" 00:32:47.067 ], 00:32:47.067 "product_name": "NVMe disk", 00:32:47.067 "block_size": 4096, 00:32:47.067 "num_blocks": 1310720, 00:32:47.067 "uuid": "3b093172-22de-49cd-a8a5-1d1289bdffaa", 00:32:47.067 "numa_id": -1, 00:32:47.067 "assigned_rate_limits": { 00:32:47.067 "rw_ios_per_sec": 0, 00:32:47.067 "rw_mbytes_per_sec": 0, 00:32:47.067 "r_mbytes_per_sec": 0, 00:32:47.067 "w_mbytes_per_sec": 0 00:32:47.067 }, 00:32:47.067 "claimed": true, 00:32:47.067 "claim_type": "read_many_write_one", 00:32:47.067 "zoned": false, 00:32:47.067 "supported_io_types": { 00:32:47.067 "read": true, 00:32:47.067 "write": true, 00:32:47.067 "unmap": true, 00:32:47.067 "flush": true, 00:32:47.067 "reset": true, 00:32:47.067 "nvme_admin": true, 00:32:47.067 "nvme_io": true, 00:32:47.067 "nvme_io_md": false, 00:32:47.067 "write_zeroes": true, 00:32:47.067 "zcopy": false, 00:32:47.067 "get_zone_info": false, 00:32:47.067 "zone_management": false, 00:32:47.067 "zone_append": false, 00:32:47.067 "compare": true, 00:32:47.067 "compare_and_write": false, 00:32:47.067 "abort": true, 00:32:47.067 "seek_hole": false, 00:32:47.067 "seek_data": false, 00:32:47.067 "copy": true, 00:32:47.067 "nvme_iov_md": false 00:32:47.067 }, 00:32:47.067 "driver_specific": { 00:32:47.067 "nvme": [ 00:32:47.067 { 00:32:47.067 "pci_address": "0000:00:11.0", 00:32:47.067 "trid": { 00:32:47.067 "trtype": "PCIe", 00:32:47.067 "traddr": "0000:00:11.0" 00:32:47.067 }, 00:32:47.067 "ctrlr_data": { 00:32:47.067 "cntlid": 0, 00:32:47.067 "vendor_id": "0x1b36", 00:32:47.067 "model_number": "QEMU NVMe Ctrl", 00:32:47.067 "serial_number": "12341", 00:32:47.067 "firmware_revision": "8.0.0", 00:32:47.067 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:47.067 "oacs": { 00:32:47.067 "security": 0, 00:32:47.067 "format": 1, 00:32:47.067 "firmware": 0, 00:32:47.067 "ns_manage": 1 00:32:47.067 }, 00:32:47.067 "multi_ctrlr": false, 00:32:47.067 "ana_reporting": false 00:32:47.067 }, 00:32:47.067 "vs": { 00:32:47.067 "nvme_version": "1.4" 00:32:47.067 }, 00:32:47.067 "ns_data": { 00:32:47.067 "id": 1, 00:32:47.067 "can_share": false 00:32:47.067 } 00:32:47.067 } 00:32:47.067 ], 00:32:47.067 "mp_policy": "active_passive" 00:32:47.067 } 00:32:47.067 } 00:32:47.067 ]' 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:32:47.067 12:44:54 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:47.327 12:44:54 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=0c699565-84da-4816-9856-9e026b7d50d6 00:32:47.328 12:44:54 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:32:47.328 12:44:54 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c699565-84da-4816-9856-9e026b7d50d6 00:32:47.589 12:44:54 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:32:47.849 12:44:54 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=12c14c6f-cdf9-42af-8116-0e68eee60646 00:32:47.850 12:44:54 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 12c14c6f-cdf9-42af-8116-0e68eee60646 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:48.111 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:48.371 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:48.371 { 00:32:48.371 "name": "7943e7bd-4b11-4fed-a520-cc80f9cf1fe0", 00:32:48.371 "aliases": [ 00:32:48.371 "lvs/nvme0n1p0" 00:32:48.371 ], 00:32:48.371 "product_name": "Logical Volume", 00:32:48.371 "block_size": 4096, 00:32:48.371 "num_blocks": 26476544, 00:32:48.371 "uuid": "7943e7bd-4b11-4fed-a520-cc80f9cf1fe0", 00:32:48.371 "assigned_rate_limits": { 00:32:48.371 "rw_ios_per_sec": 0, 00:32:48.371 "rw_mbytes_per_sec": 0, 00:32:48.371 "r_mbytes_per_sec": 0, 00:32:48.371 "w_mbytes_per_sec": 0 00:32:48.371 }, 00:32:48.371 "claimed": false, 00:32:48.371 "zoned": false, 00:32:48.371 "supported_io_types": { 00:32:48.371 "read": true, 00:32:48.371 "write": true, 00:32:48.371 "unmap": true, 00:32:48.371 "flush": false, 00:32:48.371 "reset": true, 00:32:48.371 "nvme_admin": false, 00:32:48.371 "nvme_io": false, 00:32:48.371 "nvme_io_md": false, 00:32:48.371 "write_zeroes": true, 00:32:48.371 "zcopy": false, 00:32:48.371 "get_zone_info": false, 00:32:48.371 "zone_management": false, 00:32:48.371 "zone_append": 
false, 00:32:48.371 "compare": false, 00:32:48.371 "compare_and_write": false, 00:32:48.371 "abort": false, 00:32:48.371 "seek_hole": true, 00:32:48.371 "seek_data": true, 00:32:48.371 "copy": false, 00:32:48.371 "nvme_iov_md": false 00:32:48.371 }, 00:32:48.371 "driver_specific": { 00:32:48.371 "lvol": { 00:32:48.371 "lvol_store_uuid": "12c14c6f-cdf9-42af-8116-0e68eee60646", 00:32:48.371 "base_bdev": "nvme0n1", 00:32:48.371 "thin_provision": true, 00:32:48.371 "num_allocated_clusters": 0, 00:32:48.371 "snapshot": false, 00:32:48.371 "clone": false, 00:32:48.371 "esnap_clone": false 00:32:48.371 } 00:32:48.371 } 00:32:48.371 } 00:32:48.371 ]' 00:32:48.371 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:48.371 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:48.371 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:48.371 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:48.371 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:48.371 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:48.371 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:32:48.371 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:32:48.371 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:32:48.632 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:32:48.632 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:32:48.632 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:48.632 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:48.632 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:48.632 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:48.632 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:48.632 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:48.893 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:48.893 { 00:32:48.893 "name": "7943e7bd-4b11-4fed-a520-cc80f9cf1fe0", 00:32:48.893 "aliases": [ 00:32:48.893 "lvs/nvme0n1p0" 00:32:48.893 ], 00:32:48.893 "product_name": "Logical Volume", 00:32:48.893 "block_size": 4096, 00:32:48.893 "num_blocks": 26476544, 00:32:48.893 "uuid": "7943e7bd-4b11-4fed-a520-cc80f9cf1fe0", 00:32:48.893 "assigned_rate_limits": { 00:32:48.893 "rw_ios_per_sec": 0, 00:32:48.893 "rw_mbytes_per_sec": 0, 00:32:48.893 "r_mbytes_per_sec": 0, 00:32:48.893 "w_mbytes_per_sec": 0 00:32:48.893 }, 00:32:48.893 "claimed": false, 00:32:48.893 "zoned": false, 00:32:48.893 "supported_io_types": { 00:32:48.893 "read": true, 00:32:48.893 "write": true, 00:32:48.893 "unmap": true, 00:32:48.893 "flush": false, 00:32:48.893 "reset": true, 00:32:48.893 "nvme_admin": false, 00:32:48.893 "nvme_io": false, 00:32:48.893 "nvme_io_md": false, 00:32:48.893 "write_zeroes": true, 00:32:48.893 "zcopy": false, 00:32:48.893 "get_zone_info": false, 00:32:48.893 "zone_management": false, 
00:32:48.893 "zone_append": false, 00:32:48.893 "compare": false, 00:32:48.893 "compare_and_write": false, 00:32:48.893 "abort": false, 00:32:48.893 "seek_hole": true, 00:32:48.893 "seek_data": true, 00:32:48.893 "copy": false, 00:32:48.893 "nvme_iov_md": false 00:32:48.893 }, 00:32:48.893 "driver_specific": { 00:32:48.893 "lvol": { 00:32:48.893 "lvol_store_uuid": "12c14c6f-cdf9-42af-8116-0e68eee60646", 00:32:48.893 "base_bdev": "nvme0n1", 00:32:48.893 "thin_provision": true, 00:32:48.893 "num_allocated_clusters": 0, 00:32:48.893 "snapshot": false, 00:32:48.893 "clone": false, 00:32:48.893 "esnap_clone": false 00:32:48.893 } 00:32:48.893 } 00:32:48.893 } 00:32:48.893 ]' 00:32:48.893 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:48.893 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:48.894 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:48.894 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:48.894 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:48.894 12:44:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:48.894 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:32:48.894 12:44:55 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:32:49.154 12:44:56 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:32:49.155 12:44:56 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:49.155 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:49.155 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:49.155 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:49.155 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:49.155 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 00:32:49.155 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:49.155 { 00:32:49.155 "name": "7943e7bd-4b11-4fed-a520-cc80f9cf1fe0", 00:32:49.155 "aliases": [ 00:32:49.155 "lvs/nvme0n1p0" 00:32:49.155 ], 00:32:49.155 "product_name": "Logical Volume", 00:32:49.155 "block_size": 4096, 00:32:49.155 "num_blocks": 26476544, 00:32:49.155 "uuid": "7943e7bd-4b11-4fed-a520-cc80f9cf1fe0", 00:32:49.155 "assigned_rate_limits": { 00:32:49.155 "rw_ios_per_sec": 0, 00:32:49.155 "rw_mbytes_per_sec": 0, 00:32:49.155 "r_mbytes_per_sec": 0, 00:32:49.155 "w_mbytes_per_sec": 0 00:32:49.155 }, 00:32:49.155 "claimed": false, 00:32:49.155 "zoned": false, 00:32:49.155 "supported_io_types": { 00:32:49.155 "read": true, 00:32:49.155 "write": true, 00:32:49.155 "unmap": true, 00:32:49.155 "flush": false, 00:32:49.155 "reset": true, 00:32:49.155 "nvme_admin": false, 00:32:49.155 "nvme_io": false, 00:32:49.155 "nvme_io_md": false, 00:32:49.155 "write_zeroes": true, 00:32:49.155 "zcopy": false, 00:32:49.155 "get_zone_info": false, 00:32:49.155 "zone_management": false, 00:32:49.155 "zone_append": false, 00:32:49.155 "compare": false, 00:32:49.155 "compare_and_write": false, 00:32:49.155 "abort": false, 00:32:49.155 "seek_hole": 
true, 00:32:49.155 "seek_data": true, 00:32:49.155 "copy": false, 00:32:49.155 "nvme_iov_md": false 00:32:49.155 }, 00:32:49.155 "driver_specific": { 00:32:49.155 "lvol": { 00:32:49.155 "lvol_store_uuid": "12c14c6f-cdf9-42af-8116-0e68eee60646", 00:32:49.155 "base_bdev": "nvme0n1", 00:32:49.155 "thin_provision": true, 00:32:49.155 "num_allocated_clusters": 0, 00:32:49.155 "snapshot": false, 00:32:49.155 "clone": false, 00:32:49.155 "esnap_clone": false 00:32:49.155 } 00:32:49.155 } 00:32:49.155 } 00:32:49.155 ]' 00:32:49.155 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 --l2p_dram_limit 10' 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:32:49.417 12:44:56 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7943e7bd-4b11-4fed-a520-cc80f9cf1fe0 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:32:49.417 [2024-12-16 12:44:56.441831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.417 [2024-12-16 12:44:56.441996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:49.417 [2024-12-16 12:44:56.442049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:49.417 [2024-12-16 12:44:56.442069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.417 [2024-12-16 12:44:56.442131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.417 [2024-12-16 12:44:56.442151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:49.417 [2024-12-16 12:44:56.442186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:49.417 [2024-12-16 12:44:56.442206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.417 [2024-12-16 12:44:56.442238] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:49.417 [2024-12-16 12:44:56.442868] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:49.417 [2024-12-16 12:44:56.442954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.417 [2024-12-16 12:44:56.442993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:49.417 [2024-12-16 12:44:56.443016] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:32:49.417 [2024-12-16 12:44:56.443031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.417 [2024-12-16 12:44:56.443091] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID eabbca9e-a16b-4a5a-9afb-819f4331b49c 00:32:49.417 [2024-12-16 12:44:56.444414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.417 [2024-12-16 12:44:56.444503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:32:49.417 [2024-12-16 12:44:56.444516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:49.417 [2024-12-16 12:44:56.444527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.417 [2024-12-16 12:44:56.451375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.417 [2024-12-16 12:44:56.451459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:49.417 [2024-12-16 12:44:56.451497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.813 ms 00:32:49.417 [2024-12-16 12:44:56.451515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.417 [2024-12-16 12:44:56.451596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.417 [2024-12-16 12:44:56.451618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:49.417 [2024-12-16 12:44:56.451633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:49.417 [2024-12-16 12:44:56.451652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.417 [2024-12-16 12:44:56.451710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.417 [2024-12-16 12:44:56.451732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:49.417 [2024-12-16 12:44:56.451749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:49.417 [2024-12-16 12:44:56.451796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.417 [2024-12-16 12:44:56.451825] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:49.417 [2024-12-16 12:44:56.455105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.417 [2024-12-16 12:44:56.455199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:49.417 [2024-12-16 12:44:56.455245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:32:49.417 [2024-12-16 12:44:56.455262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.417 [2024-12-16 12:44:56.455302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.417 [2024-12-16 12:44:56.455318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:49.417 [2024-12-16 12:44:56.455335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:49.418 [2024-12-16 12:44:56.455354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.418 [2024-12-16 12:44:56.455385] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:32:49.418 [2024-12-16 12:44:56.455511] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:49.418 [2024-12-16 12:44:56.455571] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:49.418 [2024-12-16 12:44:56.455622] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:49.418 [2024-12-16 12:44:56.455697] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:49.418 [2024-12-16 12:44:56.455723] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:49.418 [2024-12-16 12:44:56.455808] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:49.418 [2024-12-16 12:44:56.455825] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:49.418 [2024-12-16 12:44:56.455845] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:49.418 [2024-12-16 12:44:56.455861] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:49.418 [2024-12-16 12:44:56.455897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.418 [2024-12-16 12:44:56.455920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:49.418 [2024-12-16 12:44:56.455938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:32:49.418 [2024-12-16 12:44:56.455952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.418 [2024-12-16 12:44:56.456031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.418 [2024-12-16 12:44:56.456049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:49.418 [2024-12-16 12:44:56.456099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:49.418 [2024-12-16 12:44:56.456117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.418 [2024-12-16 12:44:56.456257] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:49.418 [2024-12-16 12:44:56.456302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:49.418 [2024-12-16 12:44:56.456323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:49.418 [2024-12-16 12:44:56.456338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.418 [2024-12-16 12:44:56.456356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:49.418 [2024-12-16 12:44:56.456389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:49.418 [2024-12-16 12:44:56.456408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:49.418 [2024-12-16 12:44:56.456423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:49.418 [2024-12-16 12:44:56.456439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:49.418 [2024-12-16 12:44:56.456469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:49.418 [2024-12-16 12:44:56.456486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:49.418 [2024-12-16 12:44:56.456501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:49.418 [2024-12-16 12:44:56.456519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:49.418 [2024-12-16 12:44:56.456533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:49.418 [2024-12-16 12:44:56.456549] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:32:49.418 [2024-12-16 12:44:56.456562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.418 [2024-12-16 12:44:56.456580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:49.418 [2024-12-16 12:44:56.456594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:49.418 [2024-12-16 12:44:56.456610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.418 [2024-12-16 12:44:56.456624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:49.418 [2024-12-16 12:44:56.456640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:49.418 [2024-12-16 12:44:56.456655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.418 [2024-12-16 12:44:56.456723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:49.418 [2024-12-16 12:44:56.456742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:49.418 [2024-12-16 12:44:56.456758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.418 [2024-12-16 12:44:56.456773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:49.418 [2024-12-16 12:44:56.456788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:49.418 [2024-12-16 12:44:56.456832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.418 [2024-12-16 12:44:56.456849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:49.418 [2024-12-16 12:44:56.456864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:49.418 [2024-12-16 12:44:56.456881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.418 [2024-12-16 12:44:56.456895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:49.418 [2024-12-16 12:44:56.456913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:49.418 [2024-12-16 12:44:56.456947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:49.418 [2024-12-16 12:44:56.456965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:49.418 [2024-12-16 12:44:56.456980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:49.418 [2024-12-16 12:44:56.456995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:49.418 [2024-12-16 12:44:56.457010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:49.418 [2024-12-16 12:44:56.457044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:49.418 [2024-12-16 12:44:56.457061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.418 [2024-12-16 12:44:56.457077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:49.418 [2024-12-16 12:44:56.457117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:49.418 [2024-12-16 12:44:56.457151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.418 [2024-12-16 12:44:56.457188] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:49.418 [2024-12-16 12:44:56.457205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:49.418 [2024-12-16 12:44:56.457252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:49.418 [2024-12-16 
12:44:56.457272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.418 [2024-12-16 12:44:56.457298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:49.418 [2024-12-16 12:44:56.457316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:49.418 [2024-12-16 12:44:56.457340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:49.418 [2024-12-16 12:44:56.457356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:49.418 [2024-12-16 12:44:56.457385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:49.418 [2024-12-16 12:44:56.457402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:49.418 [2024-12-16 12:44:56.457421] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:49.418 [2024-12-16 12:44:56.457447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:49.418 [2024-12-16 12:44:56.457473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:49.418 [2024-12-16 12:44:56.457515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:49.418 [2024-12-16 12:44:56.457544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:49.418 [2024-12-16 12:44:56.457567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:49.418 [2024-12-16 12:44:56.457589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:49.418 [2024-12-16 12:44:56.457613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:49.418 [2024-12-16 12:44:56.457680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:49.418 [2024-12-16 12:44:56.457706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:49.418 [2024-12-16 12:44:56.457729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:49.418 [2024-12-16 12:44:56.457756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:49.418 [2024-12-16 12:44:56.457797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:49.418 [2024-12-16 12:44:56.457807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:49.418 [2024-12-16 12:44:56.457813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:49.418 [2024-12-16 12:44:56.457820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:49.418 [2024-12-16 
12:44:56.457826] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:49.418 [2024-12-16 12:44:56.457835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:49.418 [2024-12-16 12:44:56.457842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:49.418 [2024-12-16 12:44:56.457853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:49.418 [2024-12-16 12:44:56.457860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:49.418 [2024-12-16 12:44:56.457868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:49.418 [2024-12-16 12:44:56.457874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.418 [2024-12-16 12:44:56.457881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:49.418 [2024-12-16 12:44:56.457887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:32:49.418 [2024-12-16 12:44:56.457895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.418 [2024-12-16 12:44:56.457928] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:32:49.418 [2024-12-16 12:44:56.457940] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:53.626 [2024-12-16 12:45:00.017877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.018014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:53.626 [2024-12-16 12:45:00.018166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3559.937 ms 00:32:53.626 [2024-12-16 12:45:00.018194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.042141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.042301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:53.626 [2024-12-16 12:45:00.042360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.755 ms 00:32:53.626 [2024-12-16 12:45:00.042382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.042497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.042521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:53.626 [2024-12-16 12:45:00.042540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:53.626 [2024-12-16 12:45:00.042560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.069631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.069752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:53.626 [2024-12-16 12:45:00.069800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.026 ms 00:32:53.626 [2024-12-16 12:45:00.069820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.069864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.069883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:53.626 [2024-12-16 12:45:00.069899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:53.626 [2024-12-16 12:45:00.069921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.070376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.070462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:53.626 [2024-12-16 12:45:00.070508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:32:53.626 [2024-12-16 12:45:00.070528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.070628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.070649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:53.626 [2024-12-16 12:45:00.070666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:32:53.626 [2024-12-16 12:45:00.070684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.083812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.083912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:53.626 [2024-12-16 12:45:00.083996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.105 ms 00:32:53.626 [2024-12-16 12:45:00.084017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.110330] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:53.626 [2024-12-16 12:45:00.113445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.113536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:53.626 [2024-12-16 12:45:00.113580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.349 ms 00:32:53.626 [2024-12-16 12:45:00.113598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.191588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.191690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:53.626 [2024-12-16 12:45:00.191735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.949 ms 00:32:53.626 [2024-12-16 12:45:00.191754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.191912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.191973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:53.626 [2024-12-16 12:45:00.191997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:32:53.626 [2024-12-16 12:45:00.192012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.210784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.210809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
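Each FTL management step above is traced by trace_step() in mngt/ftl_mngt.c as a four-record group: Action, name, duration, and status. Summing the per-step durations shows where the startup time goes; in this run the NV cache scrub alone accounts for 3559.937 ms. A minimal sketch of that aggregation, assuming one trace record per line as in the raw console output (the console.log path is illustrative, not part of the test):

    # Hedged sketch: total per-step FTL durations from a saved console log.
    # Relies only on the record shapes shown above:
    #   "... trace_step: *NOTICE*: [FTL][ftl0] name: <step>"
    #   "... trace_step: *NOTICE*: [FTL][ftl0] duration: <ms> ms"
    import re

    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

    def step_durations(path="console.log"):
        """Sum the traced duration (ms) per step name, first-seen order."""
        steps = {}          # insertion-ordered in Python 3.7+
        pending = None      # name record precedes its duration record
        with open(path) as f:
            for line in f:
                m = NAME_RE.search(line)
                if m:
                    pending = m.group(1).strip()
                    continue
                m = DUR_RE.search(line)
                if m and pending is not None:
                    steps[pending] = steps.get(pending, 0.0) + float(m.group(1))
                    pending = None
        return steps

    if __name__ == "__main__":
        for name, ms in sorted(step_durations().items(), key=lambda kv: -kv[1]):
            print(f"{ms:10.3f} ms  {name}")

Sorted this way, 'Scrub NV cache' tops the list for this run, with the remaining startup steps each contributing well under 100 ms.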
00:32:53.626 [2024-12-16 12:45:00.210820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.711 ms 00:32:53.626 [2024-12-16 12:45:00.210828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.228385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.228412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:53.626 [2024-12-16 12:45:00.228423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.525 ms 00:32:53.626 [2024-12-16 12:45:00.228429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.228864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.228873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:53.626 [2024-12-16 12:45:00.228884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:32:53.626 [2024-12-16 12:45:00.228890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.294944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.294973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:53.626 [2024-12-16 12:45:00.294986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.026 ms 00:32:53.626 [2024-12-16 12:45:00.294993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.315345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.315372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:53.626 [2024-12-16 12:45:00.315383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.295 ms 00:32:53.626 [2024-12-16 12:45:00.315390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.333881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.333907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:53.626 [2024-12-16 12:45:00.333917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.459 ms 00:32:53.626 [2024-12-16 12:45:00.333924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.353049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.353163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:53.626 [2024-12-16 12:45:00.353180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.094 ms 00:32:53.626 [2024-12-16 12:45:00.353186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.353218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.626 [2024-12-16 12:45:00.353226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:53.626 [2024-12-16 12:45:00.353236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:53.626 [2024-12-16 12:45:00.353242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.626 [2024-12-16 12:45:00.353312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.627 [2024-12-16 12:45:00.353322] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:53.627 [2024-12-16 12:45:00.353340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:53.627 [2024-12-16 12:45:00.353346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.627 [2024-12-16 12:45:00.354558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3912.317 ms, result 0 00:32:53.627 { 00:32:53.627 "name": "ftl0", 00:32:53.627 "uuid": "eabbca9e-a16b-4a5a-9afb-819f4331b49c" 00:32:53.627 } 00:32:53.627 12:45:00 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:32:53.627 12:45:00 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:53.627 12:45:00 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:32:53.627 12:45:00 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:53.889 [2024-12-16 12:45:00.769687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.889 [2024-12-16 12:45:00.769809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:53.889 [2024-12-16 12:45:00.769823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:53.889 [2024-12-16 12:45:00.769832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.889 [2024-12-16 12:45:00.769853] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:53.889 [2024-12-16 12:45:00.772112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.889 [2024-12-16 12:45:00.772136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:53.889 [2024-12-16 12:45:00.772148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:32:53.889 [2024-12-16 12:45:00.772163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.889 [2024-12-16 12:45:00.772385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.889 [2024-12-16 12:45:00.772395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:53.889 [2024-12-16 12:45:00.772404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:32:53.889 [2024-12-16 12:45:00.772410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.890 [2024-12-16 12:45:00.774841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.890 [2024-12-16 12:45:00.774858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:53.890 [2024-12-16 12:45:00.774868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.418 ms 00:32:53.890 [2024-12-16 12:45:00.774875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.890 [2024-12-16 12:45:00.779452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.890 [2024-12-16 12:45:00.779476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:53.890 [2024-12-16 12:45:00.779485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.561 ms 00:32:53.890 [2024-12-16 12:45:00.779492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.890 [2024-12-16 12:45:00.797481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.890 
[2024-12-16 12:45:00.797507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:53.890 [2024-12-16 12:45:00.797516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.935 ms 00:32:53.890 [2024-12-16 12:45:00.797522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.890 [2024-12-16 12:45:00.810622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.890 [2024-12-16 12:45:00.810738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:53.890 [2024-12-16 12:45:00.810756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.066 ms 00:32:53.890 [2024-12-16 12:45:00.810762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.890 [2024-12-16 12:45:00.810878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.890 [2024-12-16 12:45:00.810886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:53.890 [2024-12-16 12:45:00.810894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:32:53.890 [2024-12-16 12:45:00.810903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.890 [2024-12-16 12:45:00.829272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.890 [2024-12-16 12:45:00.829374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:53.890 [2024-12-16 12:45:00.829390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.353 ms 00:32:53.890 [2024-12-16 12:45:00.829396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.890 [2024-12-16 12:45:00.847549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.890 [2024-12-16 12:45:00.847573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:53.890 [2024-12-16 12:45:00.847584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.125 ms 00:32:53.890 [2024-12-16 12:45:00.847589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.890 [2024-12-16 12:45:00.865338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.890 [2024-12-16 12:45:00.865362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:53.890 [2024-12-16 12:45:00.865372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.716 ms 00:32:53.890 [2024-12-16 12:45:00.865377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.890 [2024-12-16 12:45:00.882878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.890 [2024-12-16 12:45:00.882903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:53.890 [2024-12-16 12:45:00.882912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.445 ms 00:32:53.890 [2024-12-16 12:45:00.882918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.890 [2024-12-16 12:45:00.882947] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:53.890 [2024-12-16 12:45:00.882958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.882969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.882976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.882983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.882989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.882997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883146] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:53.890 [2024-12-16 12:45:00.883289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 
12:45:00.883334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:32:53.891 [2024-12-16 12:45:00.883521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:53.891 [2024-12-16 12:45:00.883679] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:53.891 [2024-12-16 12:45:00.883687] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eabbca9e-a16b-4a5a-9afb-819f4331b49c 00:32:53.891 
[2024-12-16 12:45:00.883693] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:53.891 [2024-12-16 12:45:00.883704] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:53.891 [2024-12-16 12:45:00.883710] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:53.891 [2024-12-16 12:45:00.883717] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:53.891 [2024-12-16 12:45:00.883723] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:53.891 [2024-12-16 12:45:00.883730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:53.891 [2024-12-16 12:45:00.883736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:53.891 [2024-12-16 12:45:00.883742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:53.891 [2024-12-16 12:45:00.883747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:53.891 [2024-12-16 12:45:00.883754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.891 [2024-12-16 12:45:00.883759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:53.891 [2024-12-16 12:45:00.883767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:32:53.891 [2024-12-16 12:45:00.883774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.891 [2024-12-16 12:45:00.893486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.891 [2024-12-16 12:45:00.893510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:53.891 [2024-12-16 12:45:00.893520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.686 ms 00:32:53.891 [2024-12-16 12:45:00.893526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.891 [2024-12-16 12:45:00.893806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.891 [2024-12-16 12:45:00.893816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:53.891 [2024-12-16 12:45:00.893824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:32:53.891 [2024-12-16 12:45:00.893830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.891 [2024-12-16 12:45:00.928797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.891 [2024-12-16 12:45:00.928913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:53.891 [2024-12-16 12:45:00.928929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.891 [2024-12-16 12:45:00.928935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.891 [2024-12-16 12:45:00.928986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.891 [2024-12-16 12:45:00.928994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:53.891 [2024-12-16 12:45:00.929002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.891 [2024-12-16 12:45:00.929008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.891 [2024-12-16 12:45:00.929070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.891 [2024-12-16 12:45:00.929079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:53.891 [2024-12-16 12:45:00.929088] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.891 [2024-12-16 12:45:00.929094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.892 [2024-12-16 12:45:00.929111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.892 [2024-12-16 12:45:00.929117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:53.892 [2024-12-16 12:45:00.929127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.892 [2024-12-16 12:45:00.929133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.892 [2024-12-16 12:45:00.991765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:54.154 [2024-12-16 12:45:00.991901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:54.154 [2024-12-16 12:45:00.991918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:54.154 [2024-12-16 12:45:00.991924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:54.154 [2024-12-16 12:45:01.042824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:54.154 [2024-12-16 12:45:01.042966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:54.154 [2024-12-16 12:45:01.042985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:54.154 [2024-12-16 12:45:01.042993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:54.154 [2024-12-16 12:45:01.043087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:54.154 [2024-12-16 12:45:01.043095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:54.154 [2024-12-16 12:45:01.043103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:54.154 [2024-12-16 12:45:01.043109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:54.154 [2024-12-16 12:45:01.043150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:54.154 [2024-12-16 12:45:01.043178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:54.154 [2024-12-16 12:45:01.043187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:54.154 [2024-12-16 12:45:01.043193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:54.154 [2024-12-16 12:45:01.043279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:54.154 [2024-12-16 12:45:01.043288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:54.154 [2024-12-16 12:45:01.043297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:54.154 [2024-12-16 12:45:01.043303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:54.154 [2024-12-16 12:45:01.043332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:54.154 [2024-12-16 12:45:01.043340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:54.154 [2024-12-16 12:45:01.043348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:54.154 [2024-12-16 12:45:01.043355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:54.154 [2024-12-16 12:45:01.043395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:54.154 [2024-12-16 12:45:01.043402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:32:54.154 [2024-12-16 12:45:01.043410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:54.154 [2024-12-16 12:45:01.043416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:54.154 [2024-12-16 12:45:01.043459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:54.154 [2024-12-16 12:45:01.043467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:54.154 [2024-12-16 12:45:01.043476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:54.154 [2024-12-16 12:45:01.043482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:54.154 [2024-12-16 12:45:01.043604] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 273.870 ms, result 0 00:32:54.154 true 00:32:54.154 12:45:01 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 86873 00:32:54.154 12:45:01 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 86873 ']' 00:32:54.154 12:45:01 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 86873 00:32:54.154 12:45:01 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:32:54.154 12:45:01 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.154 12:45:01 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86873 00:32:54.154 killing process with pid 86873 00:32:54.154 12:45:01 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:54.154 12:45:01 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:54.154 12:45:01 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86873' 00:32:54.154 12:45:01 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 86873 00:32:54.154 12:45:01 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 86873 00:33:00.735 12:45:06 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:33:04.072 262144+0 records in 00:33:04.072 262144+0 records out 00:33:04.072 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.88397 s, 276 MB/s 00:33:04.072 12:45:10 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:05.465 12:45:12 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:05.465 [2024-12-16 12:45:12.505348] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
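The dd figures in the restore.sh sequence above are internally consistent: bs=4K with count=256K means 262144 records of 4 KiB, i.e. 1073741824 bytes, and dividing by the elapsed 3.88397 s reproduces the reported 276 MB/s (dd reports decimal megabytes). The md5sum taken at restore.sh@70 gives a reference checksum for the random test data that spdk_dd then writes through ftl0 using the ftl.json configuration saved earlier via save_subsystem_config. A quick worked check of the arithmetic:

    # Consistency check of the dd summary line above.
    bytes_copied = 262144 * 4096            # 256K records * 4 KiB = 1073741824 bytes (1.0 GiB)
    elapsed_s = 3.88397                     # seconds, from the dd output
    rate_mb_s = bytes_copied / elapsed_s / 1e6
    print(f"{bytes_copied} bytes, {rate_mb_s:.0f} MB/s")   # -> 1073741824 bytes, 276 MB/s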
00:33:05.465 [2024-12-16 12:45:12.505452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87094 ] 00:33:05.724 [2024-12-16 12:45:12.652191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.724 [2024-12-16 12:45:12.743487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.983 [2024-12-16 12:45:12.976294] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:05.983 [2024-12-16 12:45:12.976363] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:06.243 [2024-12-16 12:45:13.132119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.243 [2024-12-16 12:45:13.132171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:06.243 [2024-12-16 12:45:13.132184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:06.243 [2024-12-16 12:45:13.132190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.243 [2024-12-16 12:45:13.132233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.243 [2024-12-16 12:45:13.132243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:06.244 [2024-12-16 12:45:13.132250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:33:06.244 [2024-12-16 12:45:13.132256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.244 [2024-12-16 12:45:13.132270] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:06.244 [2024-12-16 12:45:13.132819] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:06.244 [2024-12-16 12:45:13.132833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.244 [2024-12-16 12:45:13.132839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:06.244 [2024-12-16 12:45:13.132846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:33:06.244 [2024-12-16 12:45:13.132852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.244 [2024-12-16 12:45:13.134130] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:06.244 [2024-12-16 12:45:13.144338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.244 [2024-12-16 12:45:13.144366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:06.244 [2024-12-16 12:45:13.144376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.209 ms 00:33:06.244 [2024-12-16 12:45:13.144383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.244 [2024-12-16 12:45:13.144433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.244 [2024-12-16 12:45:13.144441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:06.244 [2024-12-16 12:45:13.144448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:06.244 [2024-12-16 12:45:13.144453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.244 [2024-12-16 12:45:13.150879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:06.244 [2024-12-16 12:45:13.150903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:06.244 [2024-12-16 12:45:13.150911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.385 ms 00:33:06.244 [2024-12-16 12:45:13.150921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.244 [2024-12-16 12:45:13.150978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.244 [2024-12-16 12:45:13.150985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:06.244 [2024-12-16 12:45:13.150991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:33:06.244 [2024-12-16 12:45:13.150997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.244 [2024-12-16 12:45:13.151036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.244 [2024-12-16 12:45:13.151044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:06.244 [2024-12-16 12:45:13.151050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:06.244 [2024-12-16 12:45:13.151057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.244 [2024-12-16 12:45:13.151074] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:06.244 [2024-12-16 12:45:13.154110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.244 [2024-12-16 12:45:13.154133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:06.244 [2024-12-16 12:45:13.154144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:33:06.244 [2024-12-16 12:45:13.154151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.244 [2024-12-16 12:45:13.154193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.244 [2024-12-16 12:45:13.154201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:06.244 [2024-12-16 12:45:13.154208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:06.244 [2024-12-16 12:45:13.154214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.244 [2024-12-16 12:45:13.154228] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:06.244 [2024-12-16 12:45:13.154246] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:06.244 [2024-12-16 12:45:13.154275] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:06.244 [2024-12-16 12:45:13.154290] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:06.244 [2024-12-16 12:45:13.154372] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:06.244 [2024-12-16 12:45:13.154381] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:06.244 [2024-12-16 12:45:13.154391] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:06.244 [2024-12-16 12:45:13.154398] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:06.244 [2024-12-16 12:45:13.154405] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:06.244 [2024-12-16 12:45:13.154412] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:06.244 [2024-12-16 12:45:13.154418] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:06.244 [2024-12-16 12:45:13.154424] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:06.244 [2024-12-16 12:45:13.154433] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:06.244 [2024-12-16 12:45:13.154439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.244 [2024-12-16 12:45:13.154445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:06.244 [2024-12-16 12:45:13.154451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:33:06.244 [2024-12-16 12:45:13.154458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.244 [2024-12-16 12:45:13.154521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.244 [2024-12-16 12:45:13.154528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:06.244 [2024-12-16 12:45:13.154534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:06.244 [2024-12-16 12:45:13.154539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.244 [2024-12-16 12:45:13.154613] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:06.244 [2024-12-16 12:45:13.154621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:06.244 [2024-12-16 12:45:13.154628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:06.244 [2024-12-16 12:45:13.154634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:06.244 [2024-12-16 12:45:13.154646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:06.244 [2024-12-16 12:45:13.154657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:06.244 [2024-12-16 12:45:13.154663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:06.244 [2024-12-16 12:45:13.154674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:06.244 [2024-12-16 12:45:13.154680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:06.244 [2024-12-16 12:45:13.154687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:06.244 [2024-12-16 12:45:13.154698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:06.244 [2024-12-16 12:45:13.154704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:06.244 [2024-12-16 12:45:13.154709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:06.244 [2024-12-16 12:45:13.154719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:06.244 [2024-12-16 12:45:13.154724] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:06.244 [2024-12-16 12:45:13.154735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:06.244 [2024-12-16 12:45:13.154745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:06.244 [2024-12-16 12:45:13.154750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:06.244 [2024-12-16 12:45:13.154760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:06.244 [2024-12-16 12:45:13.154765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:06.244 [2024-12-16 12:45:13.154775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:06.244 [2024-12-16 12:45:13.154780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:06.244 [2024-12-16 12:45:13.154790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:06.244 [2024-12-16 12:45:13.154795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:06.244 [2024-12-16 12:45:13.154805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:06.244 [2024-12-16 12:45:13.154810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:06.244 [2024-12-16 12:45:13.154815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:06.244 [2024-12-16 12:45:13.154820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:06.244 [2024-12-16 12:45:13.154825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:06.244 [2024-12-16 12:45:13.154830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:06.244 [2024-12-16 12:45:13.154840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:06.244 [2024-12-16 12:45:13.154844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154850] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:06.244 [2024-12-16 12:45:13.154859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:06.244 [2024-12-16 12:45:13.154866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:06.244 [2024-12-16 12:45:13.154871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.244 [2024-12-16 12:45:13.154877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:06.245 [2024-12-16 12:45:13.154882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:06.245 [2024-12-16 12:45:13.154887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:06.245 
[2024-12-16 12:45:13.154892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:06.245 [2024-12-16 12:45:13.154897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:06.245 [2024-12-16 12:45:13.154903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:06.245 [2024-12-16 12:45:13.154909] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:06.245 [2024-12-16 12:45:13.154916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:06.245 [2024-12-16 12:45:13.154925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:06.245 [2024-12-16 12:45:13.154930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:06.245 [2024-12-16 12:45:13.154936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:06.245 [2024-12-16 12:45:13.154941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:06.245 [2024-12-16 12:45:13.154946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:06.245 [2024-12-16 12:45:13.154952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:06.245 [2024-12-16 12:45:13.154957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:06.245 [2024-12-16 12:45:13.154964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:06.245 [2024-12-16 12:45:13.154969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:06.245 [2024-12-16 12:45:13.154975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:06.245 [2024-12-16 12:45:13.154980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:06.245 [2024-12-16 12:45:13.154985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:06.245 [2024-12-16 12:45:13.154990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:06.245 [2024-12-16 12:45:13.154996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:06.245 [2024-12-16 12:45:13.155001] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:06.245 [2024-12-16 12:45:13.155008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:06.245 [2024-12-16 12:45:13.155015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:06.245 [2024-12-16 12:45:13.155021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:06.245 [2024-12-16 12:45:13.155026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:06.245 [2024-12-16 12:45:13.155032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:06.245 [2024-12-16 12:45:13.155037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.155044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:06.245 [2024-12-16 12:45:13.155050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:33:06.245 [2024-12-16 12:45:13.155058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.179428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.179460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:06.245 [2024-12-16 12:45:13.179469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.329 ms 00:33:06.245 [2024-12-16 12:45:13.179479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.179548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.179555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:06.245 [2024-12-16 12:45:13.179562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:33:06.245 [2024-12-16 12:45:13.179568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.227212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.227245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:06.245 [2024-12-16 12:45:13.227255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.600 ms 00:33:06.245 [2024-12-16 12:45:13.227262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.227296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.227305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:06.245 [2024-12-16 12:45:13.227314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:06.245 [2024-12-16 12:45:13.227320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.227751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.227772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:06.245 [2024-12-16 12:45:13.227780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:33:06.245 [2024-12-16 12:45:13.227786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.227901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.227915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:06.245 [2024-12-16 12:45:13.227924] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:33:06.245 [2024-12-16 12:45:13.227932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.239885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.240056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:06.245 [2024-12-16 12:45:13.240069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.937 ms 00:33:06.245 [2024-12-16 12:45:13.240075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.250697] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:06.245 [2024-12-16 12:45:13.250802] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:06.245 [2024-12-16 12:45:13.250816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.250823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:06.245 [2024-12-16 12:45:13.250830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.647 ms 00:33:06.245 [2024-12-16 12:45:13.250835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.269667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.269772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:06.245 [2024-12-16 12:45:13.269785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.804 ms 00:33:06.245 [2024-12-16 12:45:13.269792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.279227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.279253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:06.245 [2024-12-16 12:45:13.279261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.406 ms 00:33:06.245 [2024-12-16 12:45:13.279267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.288253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.288361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:06.245 [2024-12-16 12:45:13.288373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.959 ms 00:33:06.245 [2024-12-16 12:45:13.288380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.288838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.288850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:06.245 [2024-12-16 12:45:13.288857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:33:06.245 [2024-12-16 12:45:13.288866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.338004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.245 [2024-12-16 12:45:13.338035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:06.245 [2024-12-16 12:45:13.338045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 49.125 ms 00:33:06.245 [2024-12-16 12:45:13.338055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.245 [2024-12-16 12:45:13.346177] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:06.507 [2024-12-16 12:45:13.348509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.507 [2024-12-16 12:45:13.348533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:06.507 [2024-12-16 12:45:13.348542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.410 ms 00:33:06.507 [2024-12-16 12:45:13.348550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.507 [2024-12-16 12:45:13.348618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.507 [2024-12-16 12:45:13.348628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:06.507 [2024-12-16 12:45:13.348636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:06.507 [2024-12-16 12:45:13.348642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.507 [2024-12-16 12:45:13.348699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.507 [2024-12-16 12:45:13.348708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:06.507 [2024-12-16 12:45:13.348715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:06.507 [2024-12-16 12:45:13.348721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.507 [2024-12-16 12:45:13.348736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.507 [2024-12-16 12:45:13.348742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:06.507 [2024-12-16 12:45:13.348750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:06.507 [2024-12-16 12:45:13.348756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.507 [2024-12-16 12:45:13.348785] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:06.507 [2024-12-16 12:45:13.348795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.507 [2024-12-16 12:45:13.348801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:06.507 [2024-12-16 12:45:13.348809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:06.507 [2024-12-16 12:45:13.348816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.507 [2024-12-16 12:45:13.367216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.507 [2024-12-16 12:45:13.367338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:06.507 [2024-12-16 12:45:13.367352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.387 ms 00:33:06.507 [2024-12-16 12:45:13.367363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.507 [2024-12-16 12:45:13.367419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.507 [2024-12-16 12:45:13.367427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:06.507 [2024-12-16 12:45:13.367434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:33:06.507 [2024-12-16 12:45:13.367440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
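The startup trace above follows a fixed pattern: for every management step, trace_step() in mngt/ftl_mngt.c emits an Action line (427), the step name (428), its duration (430), and a status code (431), so the whole 'FTL startup' pipeline can be audited step by step. The layout dump inside it is expressed in 4 KiB FTL blocks; for example the l2p region's blk_sz of 0x5000 is 20480 blocks x 4 KiB = 80.00 MiB, matching the "blocks: 80.00 MiB" shown in the NV cache layout. A minimal sketch for summarizing step durations offline, assuming the console log has been saved with one entry per line (console.log and the patterns here are illustrative helpers, not part of the SPDK tree):

  # hypothetical post-processing helper, not an SPDK tool:
  # pair each step name (428) with its duration (430) to spot slow steps
  grep -E '(428|430):trace_step' console.log \
    | sed -E 's/.*\] (name|duration): /\1: /' \
    | paste - -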
00:33:06.507 [2024-12-16 12:45:13.368345] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 235.821 ms, result 0 00:33:07.452  [2024-12-16T12:45:15.502Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-16T12:45:16.442Z] Copying: 29/1024 [MB] (12 MBps) [2024-12-16T12:45:17.387Z] Copying: 51/1024 [MB] (22 MBps) [2024-12-16T12:45:18.774Z] Copying: 62/1024 [MB] (10 MBps) [2024-12-16T12:45:19.718Z] Copying: 74/1024 [MB] (11 MBps) [2024-12-16T12:45:20.664Z] Copying: 85/1024 [MB] (11 MBps) [2024-12-16T12:45:21.608Z] Copying: 95/1024 [MB] (10 MBps) [2024-12-16T12:45:22.551Z] Copying: 106/1024 [MB] (10 MBps) [2024-12-16T12:45:23.496Z] Copying: 117/1024 [MB] (11 MBps) [2024-12-16T12:45:24.440Z] Copying: 129/1024 [MB] (11 MBps) [2024-12-16T12:45:25.381Z] Copying: 140/1024 [MB] (11 MBps) [2024-12-16T12:45:26.763Z] Copying: 151/1024 [MB] (11 MBps) [2024-12-16T12:45:27.708Z] Copying: 162/1024 [MB] (11 MBps) [2024-12-16T12:45:28.652Z] Copying: 173/1024 [MB] (11 MBps) [2024-12-16T12:45:29.595Z] Copying: 184/1024 [MB] (11 MBps) [2024-12-16T12:45:30.540Z] Copying: 195/1024 [MB] (11 MBps) [2024-12-16T12:45:31.483Z] Copying: 206/1024 [MB] (11 MBps) [2024-12-16T12:45:32.426Z] Copying: 218/1024 [MB] (11 MBps) [2024-12-16T12:45:33.814Z] Copying: 229/1024 [MB] (11 MBps) [2024-12-16T12:45:34.388Z] Copying: 240/1024 [MB] (11 MBps) [2024-12-16T12:45:35.774Z] Copying: 251/1024 [MB] (10 MBps) [2024-12-16T12:45:36.717Z] Copying: 262/1024 [MB] (11 MBps) [2024-12-16T12:45:37.660Z] Copying: 272/1024 [MB] (10 MBps) [2024-12-16T12:45:38.604Z] Copying: 283/1024 [MB] (10 MBps) [2024-12-16T12:45:39.550Z] Copying: 295/1024 [MB] (11 MBps) [2024-12-16T12:45:40.553Z] Copying: 306/1024 [MB] (11 MBps) [2024-12-16T12:45:41.496Z] Copying: 316/1024 [MB] (10 MBps) [2024-12-16T12:45:42.440Z] Copying: 327/1024 [MB] (11 MBps) [2024-12-16T12:45:43.382Z] Copying: 339/1024 [MB] (11 MBps) [2024-12-16T12:45:44.770Z] Copying: 350/1024 [MB] (11 MBps) [2024-12-16T12:45:45.714Z] Copying: 362/1024 [MB] (11 MBps) [2024-12-16T12:45:46.658Z] Copying: 372/1024 [MB] (10 MBps) [2024-12-16T12:45:47.603Z] Copying: 383/1024 [MB] (11 MBps) [2024-12-16T12:45:48.547Z] Copying: 394/1024 [MB] (11 MBps) [2024-12-16T12:45:49.491Z] Copying: 405/1024 [MB] (11 MBps) [2024-12-16T12:45:50.437Z] Copying: 416/1024 [MB] (10 MBps) [2024-12-16T12:45:51.383Z] Copying: 436360/1048576 [kB] (10016 kBps) [2024-12-16T12:45:52.771Z] Copying: 436/1024 [MB] (10 MBps) [2024-12-16T12:45:53.717Z] Copying: 448/1024 [MB] (11 MBps) [2024-12-16T12:45:54.662Z] Copying: 459/1024 [MB] (10 MBps) [2024-12-16T12:45:55.605Z] Copying: 470/1024 [MB] (11 MBps) [2024-12-16T12:45:56.549Z] Copying: 481/1024 [MB] (10 MBps) [2024-12-16T12:45:57.498Z] Copying: 492/1024 [MB] (11 MBps) [2024-12-16T12:45:58.442Z] Copying: 504/1024 [MB] (11 MBps) [2024-12-16T12:45:59.383Z] Copying: 514/1024 [MB] (10 MBps) [2024-12-16T12:46:00.772Z] Copying: 525/1024 [MB] (11 MBps) [2024-12-16T12:46:01.717Z] Copying: 537/1024 [MB] (11 MBps) [2024-12-16T12:46:02.660Z] Copying: 548/1024 [MB] (11 MBps) [2024-12-16T12:46:03.604Z] Copying: 560/1024 [MB] (11 MBps) [2024-12-16T12:46:04.548Z] Copying: 572/1024 [MB] (11 MBps) [2024-12-16T12:46:05.492Z] Copying: 584/1024 [MB] (11 MBps) [2024-12-16T12:46:06.437Z] Copying: 595/1024 [MB] (11 MBps) [2024-12-16T12:46:07.382Z] Copying: 606/1024 [MB] (10 MBps) [2024-12-16T12:46:08.770Z] Copying: 619/1024 [MB] (12 MBps) [2024-12-16T12:46:09.404Z] Copying: 630/1024 [MB] (11 MBps) [2024-12-16T12:46:10.794Z] Copying: 642/1024 [MB] (12 MBps) 
[2024-12-16T12:46:11.739Z] Copying: 653/1024 [MB] (10 MBps) [2024-12-16T12:46:12.684Z] Copying: 664/1024 [MB] (11 MBps) [2024-12-16T12:46:13.628Z] Copying: 675/1024 [MB] (10 MBps) [2024-12-16T12:46:14.573Z] Copying: 685/1024 [MB] (10 MBps) [2024-12-16T12:46:15.515Z] Copying: 697/1024 [MB] (11 MBps) [2024-12-16T12:46:16.460Z] Copying: 708/1024 [MB] (11 MBps) [2024-12-16T12:46:17.404Z] Copying: 721/1024 [MB] (13 MBps) [2024-12-16T12:46:18.792Z] Copying: 734/1024 [MB] (13 MBps) [2024-12-16T12:46:19.737Z] Copying: 745/1024 [MB] (11 MBps) [2024-12-16T12:46:20.681Z] Copying: 757/1024 [MB] (11 MBps) [2024-12-16T12:46:21.626Z] Copying: 770/1024 [MB] (12 MBps) [2024-12-16T12:46:22.571Z] Copying: 781/1024 [MB] (11 MBps) [2024-12-16T12:46:23.514Z] Copying: 792/1024 [MB] (10 MBps) [2024-12-16T12:46:24.459Z] Copying: 806/1024 [MB] (13 MBps) [2024-12-16T12:46:25.407Z] Copying: 817/1024 [MB] (11 MBps) [2024-12-16T12:46:26.796Z] Copying: 829/1024 [MB] (12 MBps) [2024-12-16T12:46:27.741Z] Copying: 843/1024 [MB] (13 MBps) [2024-12-16T12:46:28.686Z] Copying: 854/1024 [MB] (11 MBps) [2024-12-16T12:46:29.631Z] Copying: 865/1024 [MB] (11 MBps) [2024-12-16T12:46:30.574Z] Copying: 877/1024 [MB] (11 MBps) [2024-12-16T12:46:31.518Z] Copying: 888/1024 [MB] (11 MBps) [2024-12-16T12:46:32.462Z] Copying: 899/1024 [MB] (10 MBps) [2024-12-16T12:46:33.407Z] Copying: 910/1024 [MB] (11 MBps) [2024-12-16T12:46:34.796Z] Copying: 921/1024 [MB] (11 MBps) [2024-12-16T12:46:35.740Z] Copying: 932/1024 [MB] (11 MBps) [2024-12-16T12:46:36.685Z] Copying: 944/1024 [MB] (11 MBps) [2024-12-16T12:46:37.629Z] Copying: 955/1024 [MB] (11 MBps) [2024-12-16T12:46:38.642Z] Copying: 967/1024 [MB] (11 MBps) [2024-12-16T12:46:39.584Z] Copying: 978/1024 [MB] (11 MBps) [2024-12-16T12:46:40.528Z] Copying: 989/1024 [MB] (11 MBps) [2024-12-16T12:46:41.470Z] Copying: 1000/1024 [MB] (11 MBps) [2024-12-16T12:46:42.413Z] Copying: 1011/1024 [MB] (11 MBps) [2024-12-16T12:46:42.675Z] Copying: 1023/1024 [MB] (11 MBps) [2024-12-16T12:46:42.675Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-16 12:46:42.445688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.569 [2024-12-16 12:46:42.445737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:35.569 [2024-12-16 12:46:42.445749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:35.569 [2024-12-16 12:46:42.445757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.569 [2024-12-16 12:46:42.445773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:35.569 [2024-12-16 12:46:42.448042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.569 [2024-12-16 12:46:42.448065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:35.569 [2024-12-16 12:46:42.448074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.257 ms 00:34:35.569 [2024-12-16 12:46:42.448085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.569 [2024-12-16 12:46:42.450329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.569 [2024-12-16 12:46:42.450351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:35.569 [2024-12-16 12:46:42.450359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:34:35.569 [2024-12-16 12:46:42.450366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.569 
[2024-12-16 12:46:42.450386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.569 [2024-12-16 12:46:42.450394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:35.569 [2024-12-16 12:46:42.450401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:35.569 [2024-12-16 12:46:42.450407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.569 [2024-12-16 12:46:42.450451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.569 [2024-12-16 12:46:42.450459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:35.569 [2024-12-16 12:46:42.450465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:34:35.569 [2024-12-16 12:46:42.450470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.569 [2024-12-16 12:46:42.450480] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:35.569 [2024-12-16 12:46:42.450490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 
261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:35.569 [2024-12-16 12:46:42.450695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450884] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.450994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 
12:46:42.451033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:35.570 [2024-12-16 12:46:42.451085] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:35.570 [2024-12-16 12:46:42.451091] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eabbca9e-a16b-4a5a-9afb-819f4331b49c 00:34:35.570 [2024-12-16 12:46:42.451097] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:35.570 [2024-12-16 12:46:42.451103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:34:35.570 [2024-12-16 12:46:42.451108] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:35.570 [2024-12-16 12:46:42.451116] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:35.570 [2024-12-16 12:46:42.451121] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:35.570 [2024-12-16 12:46:42.451126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:35.570 [2024-12-16 12:46:42.451134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:35.570 [2024-12-16 12:46:42.451140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:35.570 [2024-12-16 12:46:42.451145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:35.570 [2024-12-16 12:46:42.451149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.570 [2024-12-16 12:46:42.451166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:35.570 [2024-12-16 12:46:42.451173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:34:35.570 [2024-12-16 12:46:42.451180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.570 [2024-12-16 12:46:42.461414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.570 [2024-12-16 12:46:42.461542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:35.570 [2024-12-16 12:46:42.461555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.223 ms 00:34:35.570 [2024-12-16 12:46:42.461561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.570 [2024-12-16 12:46:42.461851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.570 [2024-12-16 12:46:42.461859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:35.570 [2024-12-16 
12:46:42.461865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:34:35.570 [2024-12-16 12:46:42.461872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.570 [2024-12-16 12:46:42.489498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.570 [2024-12-16 12:46:42.489526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:35.570 [2024-12-16 12:46:42.489534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.570 [2024-12-16 12:46:42.489540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.570 [2024-12-16 12:46:42.489587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.570 [2024-12-16 12:46:42.489593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:35.570 [2024-12-16 12:46:42.489600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.571 [2024-12-16 12:46:42.489606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.571 [2024-12-16 12:46:42.489641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.571 [2024-12-16 12:46:42.489653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:35.571 [2024-12-16 12:46:42.489659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.571 [2024-12-16 12:46:42.489665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.571 [2024-12-16 12:46:42.489677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.571 [2024-12-16 12:46:42.489682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:35.571 [2024-12-16 12:46:42.489692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.571 [2024-12-16 12:46:42.489697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.571 [2024-12-16 12:46:42.553750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.571 [2024-12-16 12:46:42.553791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:35.571 [2024-12-16 12:46:42.553800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.571 [2024-12-16 12:46:42.553807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.571 [2024-12-16 12:46:42.605304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.571 [2024-12-16 12:46:42.605351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:35.571 [2024-12-16 12:46:42.605360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.571 [2024-12-16 12:46:42.605367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.571 [2024-12-16 12:46:42.605435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.571 [2024-12-16 12:46:42.605443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:35.571 [2024-12-16 12:46:42.605454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.571 [2024-12-16 12:46:42.605460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.571 [2024-12-16 12:46:42.605489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.571 [2024-12-16 12:46:42.605496] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:35.571 [2024-12-16 12:46:42.605502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.571 [2024-12-16 12:46:42.605508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.571 [2024-12-16 12:46:42.605574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.571 [2024-12-16 12:46:42.605582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:35.571 [2024-12-16 12:46:42.605595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.571 [2024-12-16 12:46:42.605603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.571 [2024-12-16 12:46:42.605624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.571 [2024-12-16 12:46:42.605631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:35.571 [2024-12-16 12:46:42.605638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.571 [2024-12-16 12:46:42.605644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.571 [2024-12-16 12:46:42.605676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.571 [2024-12-16 12:46:42.605682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:35.571 [2024-12-16 12:46:42.605689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.571 [2024-12-16 12:46:42.605697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.571 [2024-12-16 12:46:42.605735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:35.571 [2024-12-16 12:46:42.605744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:35.571 [2024-12-16 12:46:42.605750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:35.571 [2024-12-16 12:46:42.605756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.571 [2024-12-16 12:46:42.605864] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 160.149 ms, result 0 00:34:36.513 00:34:36.513 00:34:36.513 12:46:43 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:34:36.513 [2024-12-16 12:46:43.377204] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
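The 'FTL fast shutdown' above ends with the SHM marked clean ("Set FTL SHM clean state"), which is what lets the spdk_dd step launched here reopen ftl0 without a full recovery (the next startup trace below confirms "SHM: clean 1, shm_clean 1"). The command line logged above reads the device contents back into testfile: 262144 blocks works out to 1024 MiB assuming the 4 KiB I/O unit, the same size as the write pass traced in the Copying progress earlier. Restated for readability, with a hypothetical integrity check appended (this excerpt does not show how restore.sh validates the read-back, so the md5sum step is an assumption):

  # as invoked by ftl/restore.sh@74 above; paths shortened for readability
  build/bin/spdk_dd --ib=ftl0 \
      --of=test/ftl/testfile \
      --json=test/ftl/config/ftl.json \
      --count=262144          # 262144 x 4 KiB = 1024 MiB
  md5sum test/ftl/testfile    # assumed follow-up check, not shown in this log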
00:34:36.513 [2024-12-16 12:46:43.377323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87997 ] 00:34:36.513 [2024-12-16 12:46:43.531984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:36.773 [2024-12-16 12:46:43.632926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.773 [2024-12-16 12:46:43.865427] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:36.773 [2024-12-16 12:46:43.865486] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:37.035 [2024-12-16 12:46:44.020956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.035 [2024-12-16 12:46:44.020997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:37.035 [2024-12-16 12:46:44.021009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:37.035 [2024-12-16 12:46:44.021016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.021057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.035 [2024-12-16 12:46:44.021067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:37.035 [2024-12-16 12:46:44.021074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:34:37.035 [2024-12-16 12:46:44.021080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.021093] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:37.035 [2024-12-16 12:46:44.021672] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:37.035 [2024-12-16 12:46:44.021691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.035 [2024-12-16 12:46:44.021697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:37.035 [2024-12-16 12:46:44.021704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:34:37.035 [2024-12-16 12:46:44.021710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.021914] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:37.035 [2024-12-16 12:46:44.021934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.035 [2024-12-16 12:46:44.021943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:37.035 [2024-12-16 12:46:44.021951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:34:37.035 [2024-12-16 12:46:44.021958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.022000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.035 [2024-12-16 12:46:44.022007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:37.035 [2024-12-16 12:46:44.022014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:37.035 [2024-12-16 12:46:44.022020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.022259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:37.035 [2024-12-16 12:46:44.022269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:37.035 [2024-12-16 12:46:44.022276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:34:37.035 [2024-12-16 12:46:44.022281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.022333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.035 [2024-12-16 12:46:44.022340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:37.035 [2024-12-16 12:46:44.022346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:34:37.035 [2024-12-16 12:46:44.022352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.022368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.035 [2024-12-16 12:46:44.022375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:37.035 [2024-12-16 12:46:44.022383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:37.035 [2024-12-16 12:46:44.022389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.022403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:37.035 [2024-12-16 12:46:44.025639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.035 [2024-12-16 12:46:44.025663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:37.035 [2024-12-16 12:46:44.025671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:34:37.035 [2024-12-16 12:46:44.025677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.025708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.035 [2024-12-16 12:46:44.025715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:37.035 [2024-12-16 12:46:44.025721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:37.035 [2024-12-16 12:46:44.025727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.025760] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:37.035 [2024-12-16 12:46:44.025778] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:37.035 [2024-12-16 12:46:44.025807] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:37.035 [2024-12-16 12:46:44.025818] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:37.035 [2024-12-16 12:46:44.025902] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:37.035 [2024-12-16 12:46:44.025910] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:37.035 [2024-12-16 12:46:44.025918] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:37.035 [2024-12-16 12:46:44.025925] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:37.035 [2024-12-16 12:46:44.025932] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:37.035 [2024-12-16 12:46:44.025940] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:37.035 [2024-12-16 12:46:44.025946] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:37.035 [2024-12-16 12:46:44.025951] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:37.035 [2024-12-16 12:46:44.025957] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:37.035 [2024-12-16 12:46:44.025963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.035 [2024-12-16 12:46:44.025969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:37.035 [2024-12-16 12:46:44.025975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:34:37.035 [2024-12-16 12:46:44.025981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.026043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.035 [2024-12-16 12:46:44.026050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:37.035 [2024-12-16 12:46:44.026056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:34:37.035 [2024-12-16 12:46:44.026063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.035 [2024-12-16 12:46:44.026135] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:37.035 [2024-12-16 12:46:44.026143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:37.035 [2024-12-16 12:46:44.026150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:37.035 [2024-12-16 12:46:44.026170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:37.035 [2024-12-16 12:46:44.026177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:37.035 [2024-12-16 12:46:44.026184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:37.035 [2024-12-16 12:46:44.026189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:37.035 [2024-12-16 12:46:44.026197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:37.035 [2024-12-16 12:46:44.026203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:37.035 [2024-12-16 12:46:44.026209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:37.035 [2024-12-16 12:46:44.026215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:37.035 [2024-12-16 12:46:44.026220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:37.035 [2024-12-16 12:46:44.026225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:37.036 [2024-12-16 12:46:44.026230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:37.036 [2024-12-16 12:46:44.026235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:37.036 [2024-12-16 12:46:44.026245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:37.036 [2024-12-16 12:46:44.026250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:37.036 [2024-12-16 12:46:44.026255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:37.036 [2024-12-16 12:46:44.026260] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:37.036 [2024-12-16 12:46:44.026266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:37.036 [2024-12-16 12:46:44.026271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:37.036 [2024-12-16 12:46:44.026275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:37.036 [2024-12-16 12:46:44.026281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:37.036 [2024-12-16 12:46:44.026286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:37.036 [2024-12-16 12:46:44.026291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:37.036 [2024-12-16 12:46:44.026296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:37.036 [2024-12-16 12:46:44.026301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:37.036 [2024-12-16 12:46:44.026306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:37.036 [2024-12-16 12:46:44.026311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:37.036 [2024-12-16 12:46:44.026316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:37.036 [2024-12-16 12:46:44.026321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:37.036 [2024-12-16 12:46:44.026327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:37.036 [2024-12-16 12:46:44.026332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:37.036 [2024-12-16 12:46:44.026337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:37.036 [2024-12-16 12:46:44.026342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:37.036 [2024-12-16 12:46:44.026347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:37.036 [2024-12-16 12:46:44.026352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:37.036 [2024-12-16 12:46:44.026357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:37.036 [2024-12-16 12:46:44.026362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:37.036 [2024-12-16 12:46:44.026368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:37.036 [2024-12-16 12:46:44.026374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:37.036 [2024-12-16 12:46:44.026379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:37.036 [2024-12-16 12:46:44.026385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:37.036 [2024-12-16 12:46:44.026390] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:37.036 [2024-12-16 12:46:44.026405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:37.036 [2024-12-16 12:46:44.026411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:37.036 [2024-12-16 12:46:44.026416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:37.036 [2024-12-16 12:46:44.026425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:37.036 [2024-12-16 12:46:44.026431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:37.036 [2024-12-16 12:46:44.026436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:37.036 
[2024-12-16 12:46:44.026441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:37.036 [2024-12-16 12:46:44.026447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:37.036 [2024-12-16 12:46:44.026452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:37.036 [2024-12-16 12:46:44.026459] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:37.036 [2024-12-16 12:46:44.026466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:37.036 [2024-12-16 12:46:44.026472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:37.036 [2024-12-16 12:46:44.026478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:37.036 [2024-12-16 12:46:44.026483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:37.036 [2024-12-16 12:46:44.026488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:37.036 [2024-12-16 12:46:44.026493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:37.036 [2024-12-16 12:46:44.026498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:37.036 [2024-12-16 12:46:44.026504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:37.036 [2024-12-16 12:46:44.026512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:37.036 [2024-12-16 12:46:44.026517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:37.036 [2024-12-16 12:46:44.026522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:37.036 [2024-12-16 12:46:44.026527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:37.036 [2024-12-16 12:46:44.026533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:37.036 [2024-12-16 12:46:44.026539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:37.036 [2024-12-16 12:46:44.026544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:37.036 [2024-12-16 12:46:44.026549] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:37.036 [2024-12-16 12:46:44.026556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:37.036 [2024-12-16 12:46:44.026564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:37.036 [2024-12-16 12:46:44.026570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:37.036 [2024-12-16 12:46:44.026576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:37.036 [2024-12-16 12:46:44.026581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:37.036 [2024-12-16 12:46:44.026587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.036 [2024-12-16 12:46:44.026592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:37.036 [2024-12-16 12:46:44.026598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:34:37.036 [2024-12-16 12:46:44.026604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.036 [2024-12-16 12:46:44.047743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.036 [2024-12-16 12:46:44.047770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:37.036 [2024-12-16 12:46:44.047779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.107 ms 00:34:37.036 [2024-12-16 12:46:44.047785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.036 [2024-12-16 12:46:44.047846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.036 [2024-12-16 12:46:44.047852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:37.036 [2024-12-16 12:46:44.047861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:34:37.036 [2024-12-16 12:46:44.047867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.036 [2024-12-16 12:46:44.090519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.036 [2024-12-16 12:46:44.090552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:37.036 [2024-12-16 12:46:44.090562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.613 ms 00:34:37.036 [2024-12-16 12:46:44.090569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.036 [2024-12-16 12:46:44.090602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.036 [2024-12-16 12:46:44.090611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:37.036 [2024-12-16 12:46:44.090618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:37.036 [2024-12-16 12:46:44.090624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.036 [2024-12-16 12:46:44.090700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.036 [2024-12-16 12:46:44.090709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:37.036 [2024-12-16 12:46:44.090716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:37.036 [2024-12-16 12:46:44.090722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.036 [2024-12-16 12:46:44.090818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.036 [2024-12-16 12:46:44.090827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:37.036 [2024-12-16 12:46:44.090833] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:34:37.036 [2024-12-16 12:46:44.090839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.036 [2024-12-16 12:46:44.102768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.036 [2024-12-16 12:46:44.102796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:37.037 [2024-12-16 12:46:44.102804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.915 ms 00:34:37.037 [2024-12-16 12:46:44.102811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.037 [2024-12-16 12:46:44.102902] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:37.037 [2024-12-16 12:46:44.102912] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:37.037 [2024-12-16 12:46:44.102919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.037 [2024-12-16 12:46:44.102928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:37.037 [2024-12-16 12:46:44.102934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:37.037 [2024-12-16 12:46:44.102940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.037 [2024-12-16 12:46:44.112070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.037 [2024-12-16 12:46:44.112093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:37.037 [2024-12-16 12:46:44.112102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.119 ms 00:34:37.037 [2024-12-16 12:46:44.112108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.037 [2024-12-16 12:46:44.112214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.037 [2024-12-16 12:46:44.112223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:37.037 [2024-12-16 12:46:44.112229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:34:37.037 [2024-12-16 12:46:44.112239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.037 [2024-12-16 12:46:44.112280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.037 [2024-12-16 12:46:44.112288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:37.037 [2024-12-16 12:46:44.112300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:37.037 [2024-12-16 12:46:44.112306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.037 [2024-12-16 12:46:44.112735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.037 [2024-12-16 12:46:44.112749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:37.037 [2024-12-16 12:46:44.112756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:34:37.037 [2024-12-16 12:46:44.112762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.037 [2024-12-16 12:46:44.112776] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:37.037 [2024-12-16 12:46:44.112784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.037 [2024-12-16 12:46:44.112791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:34:37.037 [2024-12-16 12:46:44.112797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:37.037 [2024-12-16 12:46:44.112802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.037 [2024-12-16 12:46:44.122335] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:37.037 [2024-12-16 12:46:44.122443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.037 [2024-12-16 12:46:44.122450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:37.037 [2024-12-16 12:46:44.122457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.627 ms 00:34:37.037 [2024-12-16 12:46:44.122464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.037 [2024-12-16 12:46:44.124019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.037 [2024-12-16 12:46:44.124193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:37.037 [2024-12-16 12:46:44.124205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:34:37.037 [2024-12-16 12:46:44.124212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.037 [2024-12-16 12:46:44.124299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.037 [2024-12-16 12:46:44.124308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:37.037 [2024-12-16 12:46:44.124314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:34:37.037 [2024-12-16 12:46:44.124321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.037 [2024-12-16 12:46:44.124338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.037 [2024-12-16 12:46:44.124349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:37.037 [2024-12-16 12:46:44.124355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:37.037 [2024-12-16 12:46:44.124362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.037 [2024-12-16 12:46:44.124386] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:37.037 [2024-12-16 12:46:44.124395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.037 [2024-12-16 12:46:44.124401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:37.037 [2024-12-16 12:46:44.124408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:37.037 [2024-12-16 12:46:44.124414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.299 [2024-12-16 12:46:44.144070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.299 [2024-12-16 12:46:44.144192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:37.299 [2024-12-16 12:46:44.144206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.640 ms 00:34:37.299 [2024-12-16 12:46:44.144213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.299 [2024-12-16 12:46:44.144267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:37.299 [2024-12-16 12:46:44.144275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:37.299 [2024-12-16 12:46:44.144282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:34:37.299 [2024-12-16 12:46:44.144288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:37.299 [2024-12-16 12:46:44.145109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.808 ms, result 0 00:34:38.241  [2024-12-16T12:46:46.729Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-16T12:46:47.302Z] Copying: 23/1024 [MB] (11 MBps) [2024-12-16T12:46:48.687Z] Copying: 35/1024 [MB] (11 MBps) [2024-12-16T12:46:49.632Z] Copying: 45/1024 [MB] (10 MBps) [2024-12-16T12:46:50.583Z] Copying: 56/1024 [MB] (10 MBps) [2024-12-16T12:46:51.526Z] Copying: 67/1024 [MB] (10 MBps) [2024-12-16T12:46:52.471Z] Copying: 78/1024 [MB] (11 MBps) [2024-12-16T12:46:53.415Z] Copying: 89/1024 [MB] (11 MBps) [2024-12-16T12:46:54.360Z] Copying: 101/1024 [MB] (11 MBps) [2024-12-16T12:46:55.306Z] Copying: 112/1024 [MB] (11 MBps) [2024-12-16T12:46:56.692Z] Copying: 123/1024 [MB] (10 MBps) [2024-12-16T12:46:57.635Z] Copying: 134/1024 [MB] (11 MBps) [2024-12-16T12:46:58.579Z] Copying: 146/1024 [MB] (11 MBps) [2024-12-16T12:46:59.525Z] Copying: 157/1024 [MB] (10 MBps) [2024-12-16T12:47:00.469Z] Copying: 169/1024 [MB] (11 MBps) [2024-12-16T12:47:01.412Z] Copying: 181/1024 [MB] (11 MBps) [2024-12-16T12:47:02.358Z] Copying: 192/1024 [MB] (11 MBps) [2024-12-16T12:47:03.300Z] Copying: 204/1024 [MB] (11 MBps) [2024-12-16T12:47:04.688Z] Copying: 215/1024 [MB] (11 MBps) [2024-12-16T12:47:05.632Z] Copying: 227/1024 [MB] (11 MBps) [2024-12-16T12:47:06.634Z] Copying: 238/1024 [MB] (10 MBps) [2024-12-16T12:47:07.611Z] Copying: 250/1024 [MB] (11 MBps) [2024-12-16T12:47:08.555Z] Copying: 261/1024 [MB] (11 MBps) [2024-12-16T12:47:09.500Z] Copying: 272/1024 [MB] (10 MBps) [2024-12-16T12:47:10.445Z] Copying: 284/1024 [MB] (11 MBps) [2024-12-16T12:47:11.389Z] Copying: 295/1024 [MB] (11 MBps) [2024-12-16T12:47:12.333Z] Copying: 305/1024 [MB] (10 MBps) [2024-12-16T12:47:13.723Z] Copying: 317/1024 [MB] (11 MBps) [2024-12-16T12:47:14.296Z] Copying: 328/1024 [MB] (10 MBps) [2024-12-16T12:47:15.684Z] Copying: 340/1024 [MB] (11 MBps) [2024-12-16T12:47:16.628Z] Copying: 351/1024 [MB] (11 MBps) [2024-12-16T12:47:17.570Z] Copying: 373/1024 [MB] (22 MBps) [2024-12-16T12:47:18.512Z] Copying: 384/1024 [MB] (11 MBps) [2024-12-16T12:47:19.455Z] Copying: 396/1024 [MB] (11 MBps) [2024-12-16T12:47:20.399Z] Copying: 408/1024 [MB] (11 MBps) [2024-12-16T12:47:21.342Z] Copying: 420/1024 [MB] (12 MBps) [2024-12-16T12:47:22.730Z] Copying: 431/1024 [MB] (11 MBps) [2024-12-16T12:47:23.304Z] Copying: 443/1024 [MB] (11 MBps) [2024-12-16T12:47:24.688Z] Copying: 454/1024 [MB] (11 MBps) [2024-12-16T12:47:25.632Z] Copying: 465/1024 [MB] (11 MBps) [2024-12-16T12:47:26.576Z] Copying: 477/1024 [MB] (11 MBps) [2024-12-16T12:47:27.526Z] Copying: 489/1024 [MB] (11 MBps) [2024-12-16T12:47:28.471Z] Copying: 500/1024 [MB] (11 MBps) [2024-12-16T12:47:29.416Z] Copying: 512/1024 [MB] (12 MBps) [2024-12-16T12:47:30.361Z] Copying: 524/1024 [MB] (11 MBps) [2024-12-16T12:47:31.303Z] Copying: 535/1024 [MB] (11 MBps) [2024-12-16T12:47:32.691Z] Copying: 546/1024 [MB] (11 MBps) [2024-12-16T12:47:33.636Z] Copying: 557/1024 [MB] (10 MBps) [2024-12-16T12:47:34.582Z] Copying: 568/1024 [MB] (11 MBps) [2024-12-16T12:47:35.597Z] Copying: 580/1024 [MB] (11 MBps) [2024-12-16T12:47:36.541Z] Copying: 592/1024 [MB] (11 MBps) [2024-12-16T12:47:37.486Z] Copying: 603/1024 [MB] (11 MBps) [2024-12-16T12:47:38.432Z] Copying: 615/1024 [MB] (11 MBps) [2024-12-16T12:47:39.377Z] Copying: 627/1024 [MB] 
(11 MBps) [2024-12-16T12:47:40.324Z] Copying: 639/1024 [MB] (11 MBps) [2024-12-16T12:47:41.710Z] Copying: 651/1024 [MB] (12 MBps) [2024-12-16T12:47:42.656Z] Copying: 662/1024 [MB] (11 MBps) [2024-12-16T12:47:43.601Z] Copying: 674/1024 [MB] (11 MBps) [2024-12-16T12:47:44.544Z] Copying: 685/1024 [MB] (11 MBps) [2024-12-16T12:47:45.489Z] Copying: 697/1024 [MB] (11 MBps) [2024-12-16T12:47:46.433Z] Copying: 708/1024 [MB] (11 MBps) [2024-12-16T12:47:47.378Z] Copying: 720/1024 [MB] (11 MBps) [2024-12-16T12:47:48.323Z] Copying: 732/1024 [MB] (11 MBps) [2024-12-16T12:47:49.712Z] Copying: 744/1024 [MB] (11 MBps) [2024-12-16T12:47:50.658Z] Copying: 754/1024 [MB] (10 MBps) [2024-12-16T12:47:51.602Z] Copying: 766/1024 [MB] (11 MBps) [2024-12-16T12:47:52.548Z] Copying: 778/1024 [MB] (12 MBps) [2024-12-16T12:47:53.494Z] Copying: 789/1024 [MB] (11 MBps) [2024-12-16T12:47:54.439Z] Copying: 810/1024 [MB] (21 MBps) [2024-12-16T12:47:55.386Z] Copying: 821/1024 [MB] (10 MBps) [2024-12-16T12:47:56.330Z] Copying: 832/1024 [MB] (10 MBps) [2024-12-16T12:47:57.718Z] Copying: 844/1024 [MB] (11 MBps) [2024-12-16T12:47:58.290Z] Copying: 855/1024 [MB] (11 MBps) [2024-12-16T12:47:59.678Z] Copying: 867/1024 [MB] (11 MBps) [2024-12-16T12:48:00.621Z] Copying: 878/1024 [MB] (11 MBps) [2024-12-16T12:48:01.565Z] Copying: 889/1024 [MB] (11 MBps) [2024-12-16T12:48:02.509Z] Copying: 901/1024 [MB] (11 MBps) [2024-12-16T12:48:03.454Z] Copying: 913/1024 [MB] (11 MBps) [2024-12-16T12:48:04.420Z] Copying: 926/1024 [MB] (13 MBps) [2024-12-16T12:48:05.370Z] Copying: 938/1024 [MB] (11 MBps) [2024-12-16T12:48:06.312Z] Copying: 948/1024 [MB] (10 MBps) [2024-12-16T12:48:07.699Z] Copying: 960/1024 [MB] (11 MBps) [2024-12-16T12:48:08.643Z] Copying: 972/1024 [MB] (11 MBps) [2024-12-16T12:48:09.590Z] Copying: 983/1024 [MB] (11 MBps) [2024-12-16T12:48:10.533Z] Copying: 994/1024 [MB] (10 MBps) [2024-12-16T12:48:11.477Z] Copying: 1005/1024 [MB] (11 MBps) [2024-12-16T12:48:12.050Z] Copying: 1016/1024 [MB] (10 MBps) [2024-12-16T12:48:12.311Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-16 12:48:12.162098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:05.205 [2024-12-16 12:48:12.162228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:05.205 [2024-12-16 12:48:12.162257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:05.205 [2024-12-16 12:48:12.162274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.205 [2024-12-16 12:48:12.162324] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:05.205 [2024-12-16 12:48:12.168499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:05.205 [2024-12-16 12:48:12.168568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:05.206 [2024-12-16 12:48:12.168590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.147 ms 00:36:05.206 [2024-12-16 12:48:12.168605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.206 [2024-12-16 12:48:12.169043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:05.206 [2024-12-16 12:48:12.169064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:05.206 [2024-12-16 12:48:12.169081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:36:05.206 [2024-12-16 12:48:12.169097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
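The figures in the trace above cross-check against each other: the layout dump reports 20971520 L2P entries at 4 bytes each, which is exactly the 80.00 MiB shown for the l2p region, and the copy phase moves 1024 MB between roughly 12:46:46 and 12:48:12 (~86 s), consistent with the reported average of 11 MBps. A back-of-envelope check in C, using only constants copied from the log (this is not SPDK code):

/* Back-of-envelope check of values reported in the trace above.
 * All constants are copied from the log; this is not SPDK code. */
#include <stdio.h>

int main(void)
{
    /* "L2P entries: 20971520", "L2P address size: 4" */
    unsigned long long entries = 20971520ULL;
    unsigned long long entry_size = 4;                 /* bytes per entry */
    double l2p_mib = (double)(entries * entry_size) / (1024.0 * 1024.0);
    printf("l2p region: %.2f MiB\n", l2p_mib);         /* 80.00, matches "blocks: 80.00 MiB" */

    /* Copy phase: 1024 MB between 12:46:46 and 12:48:12 (~86 s) */
    printf("throughput: %.1f MB/s\n", 1024.0 / 86.0);  /* ~11.9, matches "average 11 MBps" */
    return 0;
}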
00:36:05.206 [2024-12-16 12:48:12.169177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:05.206 [2024-12-16 12:48:12.169198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:05.206 [2024-12-16 12:48:12.169214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:36:05.206 [2024-12-16 12:48:12.169229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.206 [2024-12-16 12:48:12.169318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:05.206 [2024-12-16 12:48:12.169351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:05.206 [2024-12-16 12:48:12.169367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:36:05.206 [2024-12-16 12:48:12.169382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.206 [2024-12-16 12:48:12.169408] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:05.206 [2024-12-16 12:48:12.169431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.169995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170103] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170525] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:05.206 [2024-12-16 12:48:12.170902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.170917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.170933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.170948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.170964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.170986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.171001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 
12:48:12.171017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.171033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.171050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.171066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.171081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.171098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.171114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.171130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:05.207 [2024-12-16 12:48:12.171180] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:05.207 [2024-12-16 12:48:12.171198] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eabbca9e-a16b-4a5a-9afb-819f4331b49c 00:36:05.207 [2024-12-16 12:48:12.171214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:36:05.207 [2024-12-16 12:48:12.171229] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:36:05.207 [2024-12-16 12:48:12.171244] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:05.207 [2024-12-16 12:48:12.171261] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:05.207 [2024-12-16 12:48:12.171276] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:05.207 [2024-12-16 12:48:12.171292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:05.207 [2024-12-16 12:48:12.171306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:05.207 [2024-12-16 12:48:12.171319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:05.207 [2024-12-16 12:48:12.171333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:05.207 [2024-12-16 12:48:12.171348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:05.207 [2024-12-16 12:48:12.171364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:05.207 [2024-12-16 12:48:12.171380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.941 ms 00:36:05.207 [2024-12-16 12:48:12.171398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.207 [2024-12-16 12:48:12.182456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:05.207 [2024-12-16 12:48:12.182565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:05.207 [2024-12-16 12:48:12.182606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.028 ms 00:36:05.207 [2024-12-16 12:48:12.182624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.207 [2024-12-16 12:48:12.182935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:05.207 [2024-12-16 12:48:12.182963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:05.207 [2024-12-16 
12:48:12.183015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:36:05.207 [2024-12-16 12:48:12.183033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.207 [2024-12-16 12:48:12.210904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.207 [2024-12-16 12:48:12.211000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:05.207 [2024-12-16 12:48:12.211039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.207 [2024-12-16 12:48:12.211057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.207 [2024-12-16 12:48:12.211119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.207 [2024-12-16 12:48:12.211135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:05.207 [2024-12-16 12:48:12.211164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.207 [2024-12-16 12:48:12.211180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.207 [2024-12-16 12:48:12.211228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.207 [2024-12-16 12:48:12.211247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:05.207 [2024-12-16 12:48:12.211262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.207 [2024-12-16 12:48:12.211303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.207 [2024-12-16 12:48:12.211328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.207 [2024-12-16 12:48:12.211344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:05.207 [2024-12-16 12:48:12.211358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.207 [2024-12-16 12:48:12.211408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.207 [2024-12-16 12:48:12.274080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.207 [2024-12-16 12:48:12.274231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:05.207 [2024-12-16 12:48:12.274274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.207 [2024-12-16 12:48:12.274292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.468 [2024-12-16 12:48:12.325447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.468 [2024-12-16 12:48:12.325575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:05.468 [2024-12-16 12:48:12.325614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.468 [2024-12-16 12:48:12.325637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.468 [2024-12-16 12:48:12.325714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.468 [2024-12-16 12:48:12.325734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:05.468 [2024-12-16 12:48:12.325750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.468 [2024-12-16 12:48:12.325764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.468 [2024-12-16 12:48:12.325807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.468 [2024-12-16 12:48:12.325824] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:05.468 [2024-12-16 12:48:12.325841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.468 [2024-12-16 12:48:12.325881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.468 [2024-12-16 12:48:12.325986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.468 [2024-12-16 12:48:12.326008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:05.468 [2024-12-16 12:48:12.326300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.468 [2024-12-16 12:48:12.326336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.468 [2024-12-16 12:48:12.326393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.468 [2024-12-16 12:48:12.326503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:05.468 [2024-12-16 12:48:12.326556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.468 [2024-12-16 12:48:12.326574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.468 [2024-12-16 12:48:12.326673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.468 [2024-12-16 12:48:12.326694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:05.468 [2024-12-16 12:48:12.326710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.468 [2024-12-16 12:48:12.326724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.468 [2024-12-16 12:48:12.326772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:05.468 [2024-12-16 12:48:12.326824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:05.468 [2024-12-16 12:48:12.326841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:05.468 [2024-12-16 12:48:12.326857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:05.468 [2024-12-16 12:48:12.326984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 164.887 ms, result 0 00:36:06.039 00:36:06.039 00:36:06.039 12:48:12 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:08.576 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:08.576 12:48:15 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:36:08.576 [2024-12-16 12:48:15.104014] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
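The restore step above first verifies the dumped file against its manifest (md5sum -c reports testfile: OK) and then streams it back into the ftl0 bdev with spdk_dd at block offset 131072 (--seek). The statistics dumped during the preceding shutdown report "WAF: inf" simply because all 32 recorded writes were internal, leaving 0 user writes to divide by. As a rough plain-POSIX analogue of the seek-then-stream pattern (spdk_dd itself drives SPDK bdevs through the JSON config; nothing below is an SPDK API, and the 4096-byte block size is an assumption, not spdk_dd's default):

/* Plain-POSIX analogue of the spdk_dd restore invocation above:
 * copy an input file into a target starting at a block offset.
 * Illustrative only; not an SPDK API, block size assumed. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <if> <of> <seek_blocks>\n", argv[0]);
        return 1;
    }
    enum { BS = 4096 };                                /* assumed block size */
    off_t off = (off_t)strtoll(argv[3], NULL, 0) * BS; /* --seek equivalent */
    int in = open(argv[1], O_RDONLY);
    int out = open(argv[2], O_WRONLY);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    char buf[BS];
    ssize_t n;
    while ((n = read(in, buf, BS)) > 0) {              /* stream --if into --of */
        if (pwrite(out, buf, (size_t)n, off) != n) {
            perror("pwrite");
            return 1;
        }
        off += n;
    }
    close(in);
    close(out);
    return n < 0 ? 1 : 0;
}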
00:36:08.576 [2024-12-16 12:48:15.104222] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88911 ] 00:36:08.576 [2024-12-16 12:48:15.252357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.576 [2024-12-16 12:48:15.343283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.576 [2024-12-16 12:48:15.576978] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:08.576 [2024-12-16 12:48:15.577206] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:08.838 [2024-12-16 12:48:15.732636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.838 [2024-12-16 12:48:15.732783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:08.838 [2024-12-16 12:48:15.732838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:08.838 [2024-12-16 12:48:15.732857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.732914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.838 [2024-12-16 12:48:15.732937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:08.838 [2024-12-16 12:48:15.732954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:36:08.838 [2024-12-16 12:48:15.732969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.732995] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:08.838 [2024-12-16 12:48:15.733602] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:08.838 [2024-12-16 12:48:15.733674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.838 [2024-12-16 12:48:15.733684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:08.838 [2024-12-16 12:48:15.733692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:36:08.838 [2024-12-16 12:48:15.733698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.733923] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:36:08.838 [2024-12-16 12:48:15.733944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.838 [2024-12-16 12:48:15.733954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:36:08.838 [2024-12-16 12:48:15.733961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:36:08.838 [2024-12-16 12:48:15.733968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.734007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.838 [2024-12-16 12:48:15.734014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:36:08.838 [2024-12-16 12:48:15.734021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:36:08.838 [2024-12-16 12:48:15.734026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.734277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
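Each management step in this trace appears as a quadruplet of records, Action / name / duration / status, emitted by trace_step in mngt/ftl_mngt.c. A minimal sketch of a wrapper that produces records of the same shape (illustrative only, not the actual SPDK trace_step implementation):

/* Sketch of a step runner emitting the Action/name/duration/status
 * quadruplets seen throughout this log. Not SPDK's mngt/ftl_mngt.c. */
#include <stdio.h>
#include <time.h>

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

static int run_step(const char *name, int (*step)(void))
{
    double start = now_ms();
    int status = step();                    /* the managed action itself */
    printf("[FTL][ftl0] Action\n");
    printf("[FTL][ftl0] name: %s\n", name);
    printf("[FTL][ftl0] duration: %.3f ms\n", now_ms() - start);
    printf("[FTL][ftl0] status: %d\n", status);
    return status;
}

static int load_super_block(void) { return 0; }  /* hypothetical stand-in step */

int main(void)
{
    return run_step("Load super block", load_super_block);
}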
00:36:08.838 [2024-12-16 12:48:15.734287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:08.838 [2024-12-16 12:48:15.734294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:36:08.838 [2024-12-16 12:48:15.734300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.734350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.838 [2024-12-16 12:48:15.734358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:08.838 [2024-12-16 12:48:15.734364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:36:08.838 [2024-12-16 12:48:15.734370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.734386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.838 [2024-12-16 12:48:15.734396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:08.838 [2024-12-16 12:48:15.734403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:08.838 [2024-12-16 12:48:15.734408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.734422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:08.838 [2024-12-16 12:48:15.737637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.838 [2024-12-16 12:48:15.737663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:08.838 [2024-12-16 12:48:15.737671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:36:08.838 [2024-12-16 12:48:15.737677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.737708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.838 [2024-12-16 12:48:15.737715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:08.838 [2024-12-16 12:48:15.737721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:36:08.838 [2024-12-16 12:48:15.737727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.737760] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:36:08.838 [2024-12-16 12:48:15.737781] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:36:08.838 [2024-12-16 12:48:15.737810] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:36:08.838 [2024-12-16 12:48:15.737822] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:36:08.838 [2024-12-16 12:48:15.737902] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:08.838 [2024-12-16 12:48:15.737911] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:08.838 [2024-12-16 12:48:15.737919] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:08.838 [2024-12-16 12:48:15.737927] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:08.838 [2024-12-16 12:48:15.737937] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:08.838 [2024-12-16 12:48:15.737945] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:08.838 [2024-12-16 12:48:15.737951] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:08.838 [2024-12-16 12:48:15.737957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:08.838 [2024-12-16 12:48:15.737962] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:08.838 [2024-12-16 12:48:15.737969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.838 [2024-12-16 12:48:15.737975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:08.838 [2024-12-16 12:48:15.737982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:36:08.838 [2024-12-16 12:48:15.737987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.738050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.838 [2024-12-16 12:48:15.738057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:08.838 [2024-12-16 12:48:15.738064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:36:08.838 [2024-12-16 12:48:15.738070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.838 [2024-12-16 12:48:15.738141] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:08.838 [2024-12-16 12:48:15.738149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:08.838 [2024-12-16 12:48:15.738170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:08.838 [2024-12-16 12:48:15.738177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:08.838 [2024-12-16 12:48:15.738184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:08.838 [2024-12-16 12:48:15.738189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:08.838 [2024-12-16 12:48:15.738194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:08.838 [2024-12-16 12:48:15.738201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:08.838 [2024-12-16 12:48:15.738207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:08.839 [2024-12-16 12:48:15.738213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:08.839 [2024-12-16 12:48:15.738219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:08.839 [2024-12-16 12:48:15.738225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:08.839 [2024-12-16 12:48:15.738230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:08.839 [2024-12-16 12:48:15.738235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:08.839 [2024-12-16 12:48:15.738241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:08.839 [2024-12-16 12:48:15.738255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:08.839 [2024-12-16 12:48:15.738261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:08.839 [2024-12-16 12:48:15.738266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:08.839 [2024-12-16 12:48:15.738271] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:08.839 [2024-12-16 12:48:15.738276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:08.839 [2024-12-16 12:48:15.738282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:08.839 [2024-12-16 12:48:15.738287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:08.839 [2024-12-16 12:48:15.738293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:08.839 [2024-12-16 12:48:15.738298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:08.839 [2024-12-16 12:48:15.738303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:08.839 [2024-12-16 12:48:15.738308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:08.839 [2024-12-16 12:48:15.738313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:08.839 [2024-12-16 12:48:15.738318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:08.839 [2024-12-16 12:48:15.738322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:08.839 [2024-12-16 12:48:15.738327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:08.839 [2024-12-16 12:48:15.738333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:08.839 [2024-12-16 12:48:15.738338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:08.839 [2024-12-16 12:48:15.738343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:08.839 [2024-12-16 12:48:15.738348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:08.839 [2024-12-16 12:48:15.738354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:08.839 [2024-12-16 12:48:15.738359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:08.839 [2024-12-16 12:48:15.738364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:08.839 [2024-12-16 12:48:15.738369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:08.839 [2024-12-16 12:48:15.738375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:08.839 [2024-12-16 12:48:15.738380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:08.839 [2024-12-16 12:48:15.738385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:08.839 [2024-12-16 12:48:15.738390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:08.839 [2024-12-16 12:48:15.738395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:08.839 [2024-12-16 12:48:15.738401] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:08.839 [2024-12-16 12:48:15.738407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:08.839 [2024-12-16 12:48:15.738413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:08.839 [2024-12-16 12:48:15.738420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:08.839 [2024-12-16 12:48:15.738427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:08.839 [2024-12-16 12:48:15.738433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:08.839 [2024-12-16 12:48:15.738439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:08.839 
[2024-12-16 12:48:15.738445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:08.839 [2024-12-16 12:48:15.738455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:08.839 [2024-12-16 12:48:15.738461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:08.839 [2024-12-16 12:48:15.738467] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:08.839 [2024-12-16 12:48:15.738475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:08.839 [2024-12-16 12:48:15.738482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:08.839 [2024-12-16 12:48:15.738488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:08.839 [2024-12-16 12:48:15.738493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:08.839 [2024-12-16 12:48:15.738499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:08.839 [2024-12-16 12:48:15.738504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:08.839 [2024-12-16 12:48:15.738510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:08.839 [2024-12-16 12:48:15.738516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:08.839 [2024-12-16 12:48:15.738521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:08.839 [2024-12-16 12:48:15.738526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:08.839 [2024-12-16 12:48:15.738532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:08.839 [2024-12-16 12:48:15.738538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:08.839 [2024-12-16 12:48:15.738543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:08.839 [2024-12-16 12:48:15.738549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:08.839 [2024-12-16 12:48:15.738555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:08.839 [2024-12-16 12:48:15.738560] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:08.839 [2024-12-16 12:48:15.738566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:08.839 [2024-12-16 12:48:15.738572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:36:08.839 [2024-12-16 12:48:15.738577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:08.839 [2024-12-16 12:48:15.738583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:08.839 [2024-12-16 12:48:15.738589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:08.839 [2024-12-16 12:48:15.738594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.839 [2024-12-16 12:48:15.738600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:08.839 [2024-12-16 12:48:15.738605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:36:08.839 [2024-12-16 12:48:15.738611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.839 [2024-12-16 12:48:15.759678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.839 [2024-12-16 12:48:15.759792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:08.839 [2024-12-16 12:48:15.759805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.034 ms 00:36:08.839 [2024-12-16 12:48:15.759812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.839 [2024-12-16 12:48:15.759879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.839 [2024-12-16 12:48:15.759889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:08.839 [2024-12-16 12:48:15.759895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:36:08.839 [2024-12-16 12:48:15.759901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.839 [2024-12-16 12:48:15.800332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.839 [2024-12-16 12:48:15.800365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:08.839 [2024-12-16 12:48:15.800375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.391 ms 00:36:08.839 [2024-12-16 12:48:15.800385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.839 [2024-12-16 12:48:15.800416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.839 [2024-12-16 12:48:15.800424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:08.839 [2024-12-16 12:48:15.800431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:08.839 [2024-12-16 12:48:15.800437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.839 [2024-12-16 12:48:15.800510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.839 [2024-12-16 12:48:15.800520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:08.839 [2024-12-16 12:48:15.800526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:36:08.839 [2024-12-16 12:48:15.800532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.839 [2024-12-16 12:48:15.800631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.839 [2024-12-16 12:48:15.800638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:08.839 [2024-12-16 12:48:15.800644] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:36:08.839 [2024-12-16 12:48:15.800650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.839 [2024-12-16 12:48:15.812603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.839 [2024-12-16 12:48:15.812629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:08.839 [2024-12-16 12:48:15.812637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.938 ms 00:36:08.839 [2024-12-16 12:48:15.812643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.839 [2024-12-16 12:48:15.812735] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:08.839 [2024-12-16 12:48:15.812744] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:36:08.839 [2024-12-16 12:48:15.812754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.812761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:36:08.840 [2024-12-16 12:48:15.812768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:36:08.840 [2024-12-16 12:48:15.812773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.822033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.822058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:36:08.840 [2024-12-16 12:48:15.822066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.248 ms 00:36:08.840 [2024-12-16 12:48:15.822073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.822178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.822186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:36:08.840 [2024-12-16 12:48:15.822196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:36:08.840 [2024-12-16 12:48:15.822202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.822226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.822232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:36:08.840 [2024-12-16 12:48:15.822245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:36:08.840 [2024-12-16 12:48:15.822251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.822693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.822706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:08.840 [2024-12-16 12:48:15.822714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:36:08.840 [2024-12-16 12:48:15.822723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.822735] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:36:08.840 [2024-12-16 12:48:15.822742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.822748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:36:08.840 [2024-12-16 12:48:15.822755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:36:08.840 [2024-12-16 12:48:15.822762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.832583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:08.840 [2024-12-16 12:48:15.832689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.832698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:08.840 [2024-12-16 12:48:15.832705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.904 ms 00:36:08.840 [2024-12-16 12:48:15.832711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.834458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.834582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:36:08.840 [2024-12-16 12:48:15.834594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:36:08.840 [2024-12-16 12:48:15.834600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.834673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.834682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:08.840 [2024-12-16 12:48:15.834688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:36:08.840 [2024-12-16 12:48:15.834695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.834726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.834733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:08.840 [2024-12-16 12:48:15.834740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:08.840 [2024-12-16 12:48:15.834747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.834772] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:36:08.840 [2024-12-16 12:48:15.834780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.834786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:36:08.840 [2024-12-16 12:48:15.834792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:36:08.840 [2024-12-16 12:48:15.834799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.854046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.854151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:08.840 [2024-12-16 12:48:15.854172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.230 ms 00:36:08.840 [2024-12-16 12:48:15.854179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.854232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.840 [2024-12-16 12:48:15.854240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:08.840 [2024-12-16 12:48:15.854247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:36:08.840 [2024-12-16 12:48:15.854253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.840 [2024-12-16 12:48:15.855088] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 122.108 ms, result 0 00:36:09.780  [2024-12-16T12:48:18.271Z] Copying: 22/1024 [MB] (22 MBps) [...] [2024-12-16T12:49:34.834Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-16 12:49:34.710073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:27.728 [2024-12-16 12:49:34.710122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:27.728 [2024-12-16 12:49:34.710135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:27.728 [2024-12-16 12:49:34.710146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.728 [2024-12-16 12:49:34.710213] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:27.728 [2024-12-16 12:49:34.712463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:27.728 [2024-12-16 12:49:34.712597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:27.728 [2024-12-16 12:49:34.712611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.234 ms 00:37:27.728 [2024-12-16 12:49:34.712618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.728 [2024-12-16 12:49:34.715218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:27.728 [2024-12-16 12:49:34.715243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:27.728 [2024-12-16 12:49:34.715252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.581 ms 00:37:27.728 [2024-12-16 12:49:34.715259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.728 [2024-12-16 12:49:34.715286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:27.728 [2024-12-16 12:49:34.715293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:37:27.728 [2024-12-16 12:49:34.715300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:37:27.728 [2024-12-16 12:49:34.715306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.728 [2024-12-16 12:49:34.715346] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:27.728 [2024-12-16 12:49:34.715353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:37:27.728 [2024-12-16 12:49:34.715360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:37:27.728 [2024-12-16 12:49:34.715366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.728 [2024-12-16 12:49:34.715376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:27.728 [2024-12-16 12:49:34.715388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 512 / 261120 wr_cnt: 1 state: open 00:37:27.728 [2024-12-16 12:49:34.715396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715512] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:27.728 [2024-12-16 12:49:34.715607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 
[2024-12-16 12:49:34.715664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:37:27.729 [2024-12-16 12:49:34.715810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:27.729 [2024-12-16 12:49:34.715983] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:27.729 [2024-12-16 12:49:34.715989] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eabbca9e-a16b-4a5a-9afb-819f4331b49c 00:37:27.729 [2024-12-16 12:49:34.715994] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 512 00:37:27.729 [2024-12-16 12:49:34.716000] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 544 00:37:27.729 [2024-12-16 12:49:34.716005] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 512 00:37:27.729 [2024-12-16 12:49:34.716011] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0625 00:37:27.729 [2024-12-16 12:49:34.716016] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:27.729 [2024-12-16 12:49:34.716022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:27.729 [2024-12-16 12:49:34.716028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:27.729 [2024-12-16 12:49:34.716033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:27.729 [2024-12-16 12:49:34.716038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:27.729 [2024-12-16 12:49:34.716044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:27.729 [2024-12-16 12:49:34.716049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:27.729 [2024-12-16 12:49:34.716057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:37:27.729 [2024-12-16 12:49:34.716063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.729 [2024-12-16 12:49:34.726439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:27.729 [2024-12-16 12:49:34.726467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:27.729 [2024-12-16 12:49:34.726476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.365 ms 00:37:27.729 [2024-12-16 12:49:34.726482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.729 [2024-12-16 12:49:34.726776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:27.729 [2024-12-16 12:49:34.726790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:27.729 [2024-12-16 12:49:34.726797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:37:27.729 [2024-12-16 12:49:34.726803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.729 [2024-12-16 12:49:34.754179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.729 [2024-12-16 12:49:34.754203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:27.729 [2024-12-16 12:49:34.754211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:37:27.729 [2024-12-16 12:49:34.754217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.729 [2024-12-16 12:49:34.754268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.729 [2024-12-16 12:49:34.754274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:27.729 [2024-12-16 12:49:34.754281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:27.729 [2024-12-16 12:49:34.754286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.729 [2024-12-16 12:49:34.754332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.729 [2024-12-16 12:49:34.754340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:27.729 [2024-12-16 12:49:34.754346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:27.729 [2024-12-16 12:49:34.754352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.729 [2024-12-16 12:49:34.754363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.730 [2024-12-16 12:49:34.754371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:27.730 [2024-12-16 12:49:34.754377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:27.730 [2024-12-16 12:49:34.754383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.730 [2024-12-16 12:49:34.816999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.730 [2024-12-16 12:49:34.817030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:27.730 [2024-12-16 12:49:34.817039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:27.730 [2024-12-16 12:49:34.817045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.991 [2024-12-16 12:49:34.867873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.991 [2024-12-16 12:49:34.867984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:27.991 [2024-12-16 12:49:34.867992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:27.991 [2024-12-16 12:49:34.867998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.991 [2024-12-16 12:49:34.868045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.991 [2024-12-16 12:49:34.868056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:27.991 [2024-12-16 12:49:34.868063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:27.991 [2024-12-16 12:49:34.868069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.991 [2024-12-16 12:49:34.868116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.991 [2024-12-16 12:49:34.868123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:27.991 [2024-12-16 12:49:34.868131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:27.991 [2024-12-16 12:49:34.868138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.991 [2024-12-16 12:49:34.868214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.991 [2024-12-16 12:49:34.868223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:27.991 
[2024-12-16 12:49:34.868229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:27.991 [2024-12-16 12:49:34.868235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.991 [2024-12-16 12:49:34.868261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.991 [2024-12-16 12:49:34.868268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:27.991 [2024-12-16 12:49:34.868274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:27.991 [2024-12-16 12:49:34.868282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.991 [2024-12-16 12:49:34.868314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.991 [2024-12-16 12:49:34.868322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:27.991 [2024-12-16 12:49:34.868328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:27.991 [2024-12-16 12:49:34.868334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.991 [2024-12-16 12:49:34.868371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:27.991 [2024-12-16 12:49:34.868378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:27.991 [2024-12-16 12:49:34.868387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:27.991 [2024-12-16 12:49:34.868393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:27.991 [2024-12-16 12:49:34.868496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 158.397 ms, result 0 00:37:28.561 00:37:28.561 00:37:28.561 12:49:35 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:37:28.561 [2024-12-16 12:49:35.546992] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:37:28.561 [2024-12-16 12:49:35.547114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89707 ] 00:37:28.822 [2024-12-16 12:49:35.702269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.822 [2024-12-16 12:49:35.788708] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.085 [2024-12-16 12:49:36.021172] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:29.085 [2024-12-16 12:49:36.021231] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:29.085 [2024-12-16 12:49:36.176986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.085 [2024-12-16 12:49:36.177026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:29.085 [2024-12-16 12:49:36.177038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:29.085 [2024-12-16 12:49:36.177044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.177084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.085 [2024-12-16 12:49:36.177094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:29.085 [2024-12-16 12:49:36.177101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:37:29.085 [2024-12-16 12:49:36.177107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.177120] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:29.085 [2024-12-16 12:49:36.177702] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:29.085 [2024-12-16 12:49:36.177722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.085 [2024-12-16 12:49:36.177729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:29.085 [2024-12-16 12:49:36.177735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:37:29.085 [2024-12-16 12:49:36.177741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.177954] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:37:29.085 [2024-12-16 12:49:36.177973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.085 [2024-12-16 12:49:36.177982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:37:29.085 [2024-12-16 12:49:36.177989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:37:29.085 [2024-12-16 12:49:36.177995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.178059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.085 [2024-12-16 12:49:36.178067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:37:29.085 [2024-12-16 12:49:36.178074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:37:29.085 [2024-12-16 12:49:36.178080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.178294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:37:29.085 [2024-12-16 12:49:36.178304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:29.085 [2024-12-16 12:49:36.178310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:37:29.085 [2024-12-16 12:49:36.178317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.178369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.085 [2024-12-16 12:49:36.178377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:29.085 [2024-12-16 12:49:36.178383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:37:29.085 [2024-12-16 12:49:36.178389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.178407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.085 [2024-12-16 12:49:36.178414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:29.085 [2024-12-16 12:49:36.178422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:29.085 [2024-12-16 12:49:36.178428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.178442] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:29.085 [2024-12-16 12:49:36.181676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.085 [2024-12-16 12:49:36.181702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:29.085 [2024-12-16 12:49:36.181709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.237 ms 00:37:29.085 [2024-12-16 12:49:36.181715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.181746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.085 [2024-12-16 12:49:36.181752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:29.085 [2024-12-16 12:49:36.181758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:37:29.085 [2024-12-16 12:49:36.181764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.181797] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:37:29.085 [2024-12-16 12:49:36.181814] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:37:29.085 [2024-12-16 12:49:36.181845] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:37:29.085 [2024-12-16 12:49:36.181858] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:37:29.085 [2024-12-16 12:49:36.181940] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:29.085 [2024-12-16 12:49:36.181948] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:29.085 [2024-12-16 12:49:36.181956] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:37:29.085 [2024-12-16 12:49:36.181965] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:29.085 [2024-12-16 12:49:36.181971] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:29.085 [2024-12-16 12:49:36.181980] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:37:29.085 [2024-12-16 12:49:36.181985] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:29.085 [2024-12-16 12:49:36.181991] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:29.085 [2024-12-16 12:49:36.181997] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:29.085 [2024-12-16 12:49:36.182003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.085 [2024-12-16 12:49:36.182008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:29.085 [2024-12-16 12:49:36.182014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:37:29.085 [2024-12-16 12:49:36.182020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.182083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.085 [2024-12-16 12:49:36.182090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:29.085 [2024-12-16 12:49:36.182095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:37:29.085 [2024-12-16 12:49:36.182103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.085 [2024-12-16 12:49:36.182186] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:29.085 [2024-12-16 12:49:36.182195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:29.085 [2024-12-16 12:49:36.182202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:29.085 [2024-12-16 12:49:36.182207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:29.085 [2024-12-16 12:49:36.182215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:37:29.085 [2024-12-16 12:49:36.182220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:29.085 [2024-12-16 12:49:36.182226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:37:29.085 [2024-12-16 12:49:36.182233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:29.085 [2024-12-16 12:49:36.182240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:37:29.085 [2024-12-16 12:49:36.182245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:29.085 [2024-12-16 12:49:36.182250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:29.086 [2024-12-16 12:49:36.182256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:37:29.086 [2024-12-16 12:49:36.182260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:29.086 [2024-12-16 12:49:36.182266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:29.086 [2024-12-16 12:49:36.182271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:37:29.086 [2024-12-16 12:49:36.182281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:29.086 [2024-12-16 12:49:36.182286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:29.086 [2024-12-16 12:49:36.182292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:37:29.086 [2024-12-16 12:49:36.182297] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:29.086 [2024-12-16 12:49:36.182303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:29.086 [2024-12-16 12:49:36.182308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:37:29.086 [2024-12-16 12:49:36.182313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:29.086 [2024-12-16 12:49:36.182318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:29.086 [2024-12-16 12:49:36.182324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:37:29.086 [2024-12-16 12:49:36.182329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:29.086 [2024-12-16 12:49:36.182334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:29.086 [2024-12-16 12:49:36.182339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:37:29.086 [2024-12-16 12:49:36.182344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:29.086 [2024-12-16 12:49:36.182349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:29.086 [2024-12-16 12:49:36.182354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:37:29.086 [2024-12-16 12:49:36.182358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:29.086 [2024-12-16 12:49:36.182363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:29.086 [2024-12-16 12:49:36.182368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:37:29.086 [2024-12-16 12:49:36.182373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:29.086 [2024-12-16 12:49:36.182378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:29.086 [2024-12-16 12:49:36.182383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:37:29.086 [2024-12-16 12:49:36.182388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:29.086 [2024-12-16 12:49:36.182393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:29.086 [2024-12-16 12:49:36.182398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:37:29.086 [2024-12-16 12:49:36.182403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:29.086 [2024-12-16 12:49:36.182409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:29.086 [2024-12-16 12:49:36.182414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:37:29.086 [2024-12-16 12:49:36.182420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:29.086 [2024-12-16 12:49:36.182425] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:29.086 [2024-12-16 12:49:36.182431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:29.086 [2024-12-16 12:49:36.182437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:29.086 [2024-12-16 12:49:36.182446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:29.086 [2024-12-16 12:49:36.182454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:29.086 [2024-12-16 12:49:36.182459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:29.086 [2024-12-16 12:49:36.182464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:29.086 
[2024-12-16 12:49:36.182470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:29.086 [2024-12-16 12:49:36.182474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:29.086 [2024-12-16 12:49:36.182479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:29.086 [2024-12-16 12:49:36.182486] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:29.086 [2024-12-16 12:49:36.182493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:29.086 [2024-12-16 12:49:36.182499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:37:29.086 [2024-12-16 12:49:36.182505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:37:29.086 [2024-12-16 12:49:36.182511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:37:29.086 [2024-12-16 12:49:36.182517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:37:29.086 [2024-12-16 12:49:36.182522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:37:29.086 [2024-12-16 12:49:36.182528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:37:29.086 [2024-12-16 12:49:36.182533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:37:29.086 [2024-12-16 12:49:36.182538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:37:29.086 [2024-12-16 12:49:36.182543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:37:29.086 [2024-12-16 12:49:36.182549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:37:29.086 [2024-12-16 12:49:36.182554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:37:29.086 [2024-12-16 12:49:36.182559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:37:29.086 [2024-12-16 12:49:36.182564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:37:29.086 [2024-12-16 12:49:36.182570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:37:29.086 [2024-12-16 12:49:36.182575] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:29.086 [2024-12-16 12:49:36.182582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:29.086 [2024-12-16 12:49:36.182588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:37:29.086 [2024-12-16 12:49:36.182596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:29.086 [2024-12-16 12:49:36.182601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:29.086 [2024-12-16 12:49:36.182607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:29.086 [2024-12-16 12:49:36.182612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.086 [2024-12-16 12:49:36.182618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:29.086 [2024-12-16 12:49:36.182624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:37:29.086 [2024-12-16 12:49:36.182630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.348 [2024-12-16 12:49:36.203515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.348 [2024-12-16 12:49:36.203541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:29.348 [2024-12-16 12:49:36.203550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.854 ms 00:37:29.348 [2024-12-16 12:49:36.203556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.348 [2024-12-16 12:49:36.203616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.348 [2024-12-16 12:49:36.203622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:29.348 [2024-12-16 12:49:36.203631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:37:29.348 [2024-12-16 12:49:36.203637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.348 [2024-12-16 12:49:36.244469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.348 [2024-12-16 12:49:36.244500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:29.348 [2024-12-16 12:49:36.244510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.795 ms 00:37:29.348 [2024-12-16 12:49:36.244516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.348 [2024-12-16 12:49:36.244551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.348 [2024-12-16 12:49:36.244560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:29.348 [2024-12-16 12:49:36.244566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:37:29.348 [2024-12-16 12:49:36.244572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.348 [2024-12-16 12:49:36.244649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.348 [2024-12-16 12:49:36.244658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:29.348 [2024-12-16 12:49:36.244665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:37:29.348 [2024-12-16 12:49:36.244671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.348 [2024-12-16 12:49:36.244769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.348 [2024-12-16 12:49:36.244778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:29.348 [2024-12-16 12:49:36.244784] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:37:29.348 [2024-12-16 12:49:36.244790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.256668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.256693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:29.349 [2024-12-16 12:49:36.256701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.863 ms 00:37:29.349 [2024-12-16 12:49:36.256708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.256799] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:37:29.349 [2024-12-16 12:49:36.256809] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:37:29.349 [2024-12-16 12:49:36.256816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.256824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:37:29.349 [2024-12-16 12:49:36.256830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:37:29.349 [2024-12-16 12:49:36.256836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.265985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.266008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:37:29.349 [2024-12-16 12:49:36.266016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.138 ms 00:37:29.349 [2024-12-16 12:49:36.266023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.266119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.266127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:37:29.349 [2024-12-16 12:49:36.266134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:37:29.349 [2024-12-16 12:49:36.266143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.266176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.266184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:37:29.349 [2024-12-16 12:49:36.266191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:37:29.349 [2024-12-16 12:49:36.266203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.266643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.266659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:29.349 [2024-12-16 12:49:36.266666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:37:29.349 [2024-12-16 12:49:36.266672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.266686] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:37:29.349 [2024-12-16 12:49:36.266694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.266701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:37:29.349 [2024-12-16 12:49:36.266706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:37:29.349 [2024-12-16 12:49:36.266712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.276141] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:37:29.349 [2024-12-16 12:49:36.276260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.276269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:29.349 [2024-12-16 12:49:36.276276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.534 ms 00:37:29.349 [2024-12-16 12:49:36.276282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.278034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.278057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:37:29.349 [2024-12-16 12:49:36.278068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:37:29.349 [2024-12-16 12:49:36.278074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.278130] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:37:29.349 [2024-12-16 12:49:36.278174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.278181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:29.349 [2024-12-16 12:49:36.278188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:37:29.349 [2024-12-16 12:49:36.278194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.278224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.278232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:29.349 [2024-12-16 12:49:36.278239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:37:29.349 [2024-12-16 12:49:36.278245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.278272] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:37:29.349 [2024-12-16 12:49:36.278280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.278286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:37:29.349 [2024-12-16 12:49:36.278293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:37:29.349 [2024-12-16 12:49:36.278298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.297874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.297900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:29.349 [2024-12-16 12:49:36.297909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.561 ms 00:37:29.349 [2024-12-16 12:49:36.297915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.297969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:29.349 [2024-12-16 12:49:36.297977] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:29.349 [2024-12-16 12:49:36.297984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:37:29.349 [2024-12-16 12:49:36.297990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:29.349 [2024-12-16 12:49:36.299402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 122.049 ms, result 0 00:37:30.737  [2024-12-16T12:49:38.787Z] Copying: 888/1048576 [kB] (888 kBps) [2024-12-16T12:49:39.729Z] Copying: 1788/1048576 [kB] (900 kBps) [2024-12-16T12:49:40.674Z] Copying: 5136/1048576 [kB] (3348 kBps) [2024-12-16T12:49:41.619Z] Copying: 21/1024 [MB] (16 MBps) [2024-12-16T12:49:42.564Z] Copying: 39/1024 [MB] (17 MBps) [2024-12-16T12:49:43.509Z] Copying: 56/1024 [MB] (17 MBps) [2024-12-16T12:49:44.898Z] Copying: 74/1024 [MB] (17 MBps) [2024-12-16T12:49:45.843Z] Copying: 91/1024 [MB] (17 MBps) [2024-12-16T12:49:46.788Z] Copying: 109/1024 [MB] (17 MBps) [2024-12-16T12:49:47.750Z] Copying: 127/1024 [MB] (17 MBps) [2024-12-16T12:49:48.695Z] Copying: 144/1024 [MB] (17 MBps) [2024-12-16T12:49:49.639Z] Copying: 160/1024 [MB] (15 MBps) [2024-12-16T12:49:50.584Z] Copying: 175/1024 [MB] (15 MBps) [2024-12-16T12:49:51.528Z] Copying: 193/1024 [MB] (17 MBps) [2024-12-16T12:49:52.917Z] Copying: 210/1024 [MB] (17 MBps) [2024-12-16T12:49:53.866Z] Copying: 228/1024 [MB] (17 MBps) [2024-12-16T12:49:54.809Z] Copying: 246/1024 [MB] (17 MBps) [2024-12-16T12:49:55.760Z] Copying: 263/1024 [MB] (17 MBps) [2024-12-16T12:49:56.747Z] Copying: 281/1024 [MB] (17 MBps) [2024-12-16T12:49:57.691Z] Copying: 298/1024 [MB] (17 MBps) [2024-12-16T12:49:58.635Z] Copying: 316/1024 [MB] (17 MBps) [2024-12-16T12:49:59.579Z] Copying: 334/1024 [MB] (17 MBps) [2024-12-16T12:50:00.523Z] Copying: 351/1024 [MB] (17 MBps) [2024-12-16T12:50:01.910Z] Copying: 369/1024 [MB] (17 MBps) [2024-12-16T12:50:02.855Z] Copying: 386/1024 [MB] (17 MBps) [2024-12-16T12:50:03.799Z] Copying: 405/1024 [MB] (18 MBps) [2024-12-16T12:50:04.744Z] Copying: 422/1024 [MB] (17 MBps) [2024-12-16T12:50:05.691Z] Copying: 440/1024 [MB] (17 MBps) [2024-12-16T12:50:06.636Z] Copying: 457/1024 [MB] (17 MBps) [2024-12-16T12:50:07.581Z] Copying: 475/1024 [MB] (18 MBps) [2024-12-16T12:50:08.527Z] Copying: 494/1024 [MB] (18 MBps) [2024-12-16T12:50:09.916Z] Copying: 512/1024 [MB] (18 MBps) [2024-12-16T12:50:10.860Z] Copying: 530/1024 [MB] (17 MBps) [2024-12-16T12:50:11.806Z] Copying: 547/1024 [MB] (17 MBps) [2024-12-16T12:50:12.751Z] Copying: 565/1024 [MB] (17 MBps) [2024-12-16T12:50:13.697Z] Copying: 582/1024 [MB] (17 MBps) [2024-12-16T12:50:14.643Z] Copying: 600/1024 [MB] (17 MBps) [2024-12-16T12:50:15.593Z] Copying: 617/1024 [MB] (17 MBps) [2024-12-16T12:50:16.539Z] Copying: 635/1024 [MB] (17 MBps) [2024-12-16T12:50:17.930Z] Copying: 652/1024 [MB] (17 MBps) [2024-12-16T12:50:18.503Z] Copying: 670/1024 [MB] (17 MBps) [2024-12-16T12:50:19.892Z] Copying: 688/1024 [MB] (17 MBps) [2024-12-16T12:50:20.838Z] Copying: 706/1024 [MB] (18 MBps) [2024-12-16T12:50:21.782Z] Copying: 722/1024 [MB] (16 MBps) [2024-12-16T12:50:22.814Z] Copying: 741/1024 [MB] (18 MBps) [2024-12-16T12:50:23.758Z] Copying: 759/1024 [MB] (18 MBps) [2024-12-16T12:50:24.703Z] Copying: 777/1024 [MB] (18 MBps) [2024-12-16T12:50:25.651Z] Copying: 796/1024 [MB] (18 MBps) [2024-12-16T12:50:26.595Z] Copying: 815/1024 [MB] (18 MBps) [2024-12-16T12:50:27.541Z] Copying: 833/1024 [MB] (18 MBps) [2024-12-16T12:50:28.932Z] Copying: 850/1024 [MB] (16 MBps) 
[2024-12-16T12:50:29.505Z] Copying: 867/1024 [MB] (16 MBps) [2024-12-16T12:50:30.893Z] Copying: 885/1024 [MB] (18 MBps) [2024-12-16T12:50:31.836Z] Copying: 903/1024 [MB] (17 MBps) [2024-12-16T12:50:32.780Z] Copying: 920/1024 [MB] (17 MBps) [2024-12-16T12:50:33.726Z] Copying: 938/1024 [MB] (17 MBps) [2024-12-16T12:50:34.671Z] Copying: 954/1024 [MB] (16 MBps) [2024-12-16T12:50:35.618Z] Copying: 972/1024 [MB] (17 MBps) [2024-12-16T12:50:36.563Z] Copying: 989/1024 [MB] (17 MBps) [2024-12-16T12:50:37.507Z] Copying: 1007/1024 [MB] (17 MBps) [2024-12-16T12:50:37.768Z] Copying: 1023/1024 [MB] (15 MBps) [2024-12-16T12:50:38.031Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-16 12:50:37.886697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.925 [2024-12-16 12:50:37.886771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:30.926 [2024-12-16 12:50:37.886786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:30.926 [2024-12-16 12:50:37.886793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.926 [2024-12-16 12:50:37.886811] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:30.926 [2024-12-16 12:50:37.889423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.926 [2024-12-16 12:50:37.889461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:30.926 [2024-12-16 12:50:37.889470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:38:30.926 [2024-12-16 12:50:37.889477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.926 [2024-12-16 12:50:37.889658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.926 [2024-12-16 12:50:37.889667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:30.926 [2024-12-16 12:50:37.889674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:38:30.926 [2024-12-16 12:50:37.889681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.926 [2024-12-16 12:50:37.889706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.926 [2024-12-16 12:50:37.889714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:38:30.926 [2024-12-16 12:50:37.889720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:30.926 [2024-12-16 12:50:37.889730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.926 [2024-12-16 12:50:37.889833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.926 [2024-12-16 12:50:37.889840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:38:30.926 [2024-12-16 12:50:37.889847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:38:30.926 [2024-12-16 12:50:37.889854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.926 [2024-12-16 12:50:37.889865] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:30.926 [2024-12-16 12:50:37.889876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:38:30.926 [2024-12-16 12:50:37.889884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 2048 / 261120 wr_cnt: 1 state: open 00:38:30.926 [2024-12-16 12:50:37.889890] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.889997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890038] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 
[2024-12-16 12:50:37.890205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:30.926 [2024-12-16 12:50:37.890253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:38:30.927 [2024-12-16 12:50:37.890355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:30.927 [2024-12-16 12:50:37.890494] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:30.927 [2024-12-16 12:50:37.890501] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eabbca9e-a16b-4a5a-9afb-819f4331b49c 
00:38:30.927 [2024-12-16 12:50:37.890506] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 263168 00:38:30.927 [2024-12-16 12:50:37.890516] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263712 00:38:30.927 [2024-12-16 12:50:37.890523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 262656 00:38:30.927 [2024-12-16 12:50:37.890529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0040 00:38:30.927 [2024-12-16 12:50:37.890534] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:30.927 [2024-12-16 12:50:37.890540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:30.927 [2024-12-16 12:50:37.890546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:30.927 [2024-12-16 12:50:37.890550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:30.927 [2024-12-16 12:50:37.890555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:30.927 [2024-12-16 12:50:37.890560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.927 [2024-12-16 12:50:37.890565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:30.927 [2024-12-16 12:50:37.890571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:38:30.927 [2024-12-16 12:50:37.890578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.927 [2024-12-16 12:50:37.901677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.927 [2024-12-16 12:50:37.901709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:30.927 [2024-12-16 12:50:37.901717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.085 ms 00:38:30.927 [2024-12-16 12:50:37.901724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.927 [2024-12-16 12:50:37.902022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.927 [2024-12-16 12:50:37.902036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:30.927 [2024-12-16 12:50:37.902043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:38:30.927 [2024-12-16 12:50:37.902049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.927 [2024-12-16 12:50:37.929921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:30.927 [2024-12-16 12:50:37.929950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:30.927 [2024-12-16 12:50:37.929959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:30.927 [2024-12-16 12:50:37.929965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.927 [2024-12-16 12:50:37.930017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:30.927 [2024-12-16 12:50:37.930024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:30.927 [2024-12-16 12:50:37.930030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:30.927 [2024-12-16 12:50:37.930037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.927 [2024-12-16 12:50:37.930080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:30.927 [2024-12-16 12:50:37.930088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:30.927 [2024-12-16 
12:50:37.930095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:30.927 [2024-12-16 12:50:37.930101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.927 [2024-12-16 12:50:37.930113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:30.927 [2024-12-16 12:50:37.930120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:30.927 [2024-12-16 12:50:37.930126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:30.927 [2024-12-16 12:50:37.930132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.927 [2024-12-16 12:50:37.994278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:30.927 [2024-12-16 12:50:37.994313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:30.927 [2024-12-16 12:50:37.994321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:30.927 [2024-12-16 12:50:37.994332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:31.190 [2024-12-16 12:50:38.045590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:31.190 [2024-12-16 12:50:38.045624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:31.190 [2024-12-16 12:50:38.045633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:31.190 [2024-12-16 12:50:38.045639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:31.190 [2024-12-16 12:50:38.045713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:31.190 [2024-12-16 12:50:38.045722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:31.190 [2024-12-16 12:50:38.045728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:31.190 [2024-12-16 12:50:38.045734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:31.190 [2024-12-16 12:50:38.045763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:31.190 [2024-12-16 12:50:38.045770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:31.190 [2024-12-16 12:50:38.045776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:31.190 [2024-12-16 12:50:38.045783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:31.190 [2024-12-16 12:50:38.045842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:31.190 [2024-12-16 12:50:38.045852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:31.190 [2024-12-16 12:50:38.045858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:31.190 [2024-12-16 12:50:38.045864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:31.190 [2024-12-16 12:50:38.045887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:31.190 [2024-12-16 12:50:38.045894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:31.190 [2024-12-16 12:50:38.045901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:31.190 [2024-12-16 12:50:38.045907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:31.190 [2024-12-16 12:50:38.045939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:31.190 [2024-12-16 12:50:38.045948] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:31.190 [2024-12-16 12:50:38.045955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:31.190 [2024-12-16 12:50:38.045961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:31.190 [2024-12-16 12:50:38.045999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:31.190 [2024-12-16 12:50:38.046007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:31.190 [2024-12-16 12:50:38.046013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:31.190 [2024-12-16 12:50:38.046020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:31.190 [2024-12-16 12:50:38.046128] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 159.409 ms, result 0 00:38:31.764 00:38:31.764 00:38:31.764 12:50:38 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:38:33.680 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:38:33.680 12:50:40 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:38:33.680 12:50:40 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:38:33.680 12:50:40 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:38:33.942 12:50:40 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:38:33.942 12:50:40 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:33.942 Process with pid 86873 is not found 00:38:33.942 Remove shared memory files 00:38:33.942 12:50:40 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 86873 00:38:33.942 12:50:40 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 86873 ']' 00:38:33.942 12:50:40 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 86873 00:38:33.942 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86873) - No such process 00:38:33.942 12:50:40 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 86873 is not found' 00:38:33.942 12:50:40 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:38:33.942 12:50:40 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:33.942 12:50:40 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:38:33.943 12:50:40 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_band_md /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_l2p_l1 /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_l2p_l2 /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_l2p_l2_ctx /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_nvc_md /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_p2l_pool /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_sb /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_sb_shm /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_trim_bitmap /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_trim_log /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_trim_md /dev/hugepages/ftl_eabbca9e-a16b-4a5a-9afb-819f4331b49c_vmap 00:38:33.943 12:50:40 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:38:33.943 12:50:40 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f 
/dev/shm/iscsi 00:38:33.943 12:50:40 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:38:33.943 ************************************ 00:38:33.943 END TEST ftl_restore_fast 00:38:33.943 ************************************ 00:38:33.943 00:38:33.943 real 5m48.321s 00:38:33.943 user 5m36.990s 00:38:33.943 sys 0m11.280s 00:38:33.943 12:50:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:33.943 12:50:40 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:38:33.943 Process with pid 76766 is not found 00:38:33.943 12:50:40 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:38:33.943 12:50:40 ftl -- ftl/ftl.sh@14 -- # killprocess 76766 00:38:33.943 12:50:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 76766 ']' 00:38:33.943 12:50:40 ftl -- common/autotest_common.sh@958 -- # kill -0 76766 00:38:33.943 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76766) - No such process 00:38:33.943 12:50:40 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76766 is not found' 00:38:33.943 12:50:40 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:38:33.943 12:50:40 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=90380 00:38:33.943 12:50:40 ftl -- ftl/ftl.sh@20 -- # waitforlisten 90380 00:38:33.943 12:50:40 ftl -- common/autotest_common.sh@835 -- # '[' -z 90380 ']' 00:38:33.943 12:50:40 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:33.943 12:50:40 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:33.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:33.943 12:50:40 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.943 12:50:40 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:33.943 12:50:40 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:33.943 12:50:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:33.943 [2024-12-16 12:50:40.984221] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
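The killprocess trace just above shows how the harness probes for an already-dead FTL app: kill -0 delivers no signal and only reports, via its exit status, whether the PID still exists, and the bash error "kill: (76766) - No such process" is what leads to the 'Process with pid 76766 is not found' message. A minimal sketch of that liveness-check pattern, written as plain shell under stated assumptions (this is illustrative, not the exact autotest_common.sh source):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # nothing to do without a PID
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"                      # PID is alive: ask it to exit
            wait "$pid" 2>/dev/null          # reap it if it was our child
        else
            echo "Process with pid $pid is not found"
        fi
    }

The same pattern explains the earlier 'kill -0 86873' failure during restore_kill: probing first avoids sending a real signal to a PID that may have been reused.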
00:38:33.943 [2024-12-16 12:50:40.984517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90380 ] 00:38:34.204 [2024-12-16 12:50:41.143956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.204 [2024-12-16 12:50:41.250055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.776 12:50:41 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:34.776 12:50:41 ftl -- common/autotest_common.sh@868 -- # return 0 00:38:34.776 12:50:41 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:35.038 nvme0n1 00:38:35.038 12:50:42 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:38:35.038 12:50:42 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:35.038 12:50:42 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:35.300 12:50:42 ftl -- ftl/common.sh@28 -- # stores=12c14c6f-cdf9-42af-8116-0e68eee60646 00:38:35.300 12:50:42 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:38:35.300 12:50:42 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12c14c6f-cdf9-42af-8116-0e68eee60646 00:38:35.561 12:50:42 ftl -- ftl/ftl.sh@23 -- # killprocess 90380 00:38:35.561 12:50:42 ftl -- common/autotest_common.sh@954 -- # '[' -z 90380 ']' 00:38:35.561 12:50:42 ftl -- common/autotest_common.sh@958 -- # kill -0 90380 00:38:35.561 12:50:42 ftl -- common/autotest_common.sh@959 -- # uname 00:38:35.561 12:50:42 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:35.561 12:50:42 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90380 00:38:35.561 killing process with pid 90380 00:38:35.561 12:50:42 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:35.561 12:50:42 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:35.561 12:50:42 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90380' 00:38:35.561 12:50:42 ftl -- common/autotest_common.sh@973 -- # kill 90380 00:38:35.561 12:50:42 ftl -- common/autotest_common.sh@978 -- # wait 90380 00:38:36.949 12:50:43 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:36.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:36.949 Waiting for block devices as requested 00:38:36.949 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:37.209 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:37.209 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:38:37.209 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:38:42.488 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:38:42.488 Remove shared memory files 00:38:42.488 12:50:49 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:38:42.488 12:50:49 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:42.488 12:50:49 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:38:42.488 12:50:49 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:38:42.488 12:50:49 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:38:42.488 12:50:49 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:42.488 12:50:49 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:38:42.488 
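The clear_lvols step traced above chains two RPCs: bdev_lvol_get_lvstores returns the existing logical-volume stores as JSON, jq -r '.[] | .uuid' extracts their UUIDs, and bdev_lvol_delete_lvstore -u removes each one (here the single leftover store 12c14c6f-cdf9-42af-8116-0e68eee60646). A rough shell equivalent of what the trace shows, assuming rpc.py is on PATH and talking to the default /var/tmp/spdk.sock socket (a sketch, not the exact ftl/common.sh source):

    clear_lvols() {
        # List every lvolstore and extract its UUID from the JSON reply
        stores=$(rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        for lvs in $stores; do
            # Drop each store so the NVMe bdev is clean for the next test
            rpc.py bdev_lvol_delete_lvstore -u "$lvs"
        done
    }

Clearing stale lvolstores before tearing down the target keeps a failed earlier run from leaking state into the next autotest iteration.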
************************************ 00:38:42.488 END TEST ftl 00:38:42.488 ************************************ 00:38:42.488 00:38:42.488 real 20m51.688s 00:38:42.488 user 22m35.524s 00:38:42.488 sys 1m25.255s 00:38:42.488 12:50:49 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:42.488 12:50:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:42.488 12:50:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:42.488 12:50:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:42.488 12:50:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:42.488 12:50:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:42.488 12:50:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:42.488 12:50:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:42.489 12:50:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:42.489 12:50:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:42.489 12:50:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:42.489 12:50:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:42.489 12:50:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:42.489 12:50:49 -- common/autotest_common.sh@10 -- # set +x 00:38:42.489 12:50:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:42.489 12:50:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:42.489 12:50:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:42.489 12:50:49 -- common/autotest_common.sh@10 -- # set +x 00:38:43.899 INFO: APP EXITING 00:38:43.899 INFO: killing all VMs 00:38:43.899 INFO: killing vhost app 00:38:43.899 INFO: EXIT DONE 00:38:44.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:44.734 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:38:44.734 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:38:44.734 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:38:44.734 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:38:44.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:45.569 Cleaning 00:38:45.569 Removing: /var/run/dpdk/spdk0/config 00:38:45.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:45.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:45.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:45.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:45.569 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:45.569 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:45.569 Removing: /var/run/dpdk/spdk0 00:38:45.569 Removing: /var/run/dpdk/spdk_pid58740 00:38:45.569 Removing: /var/run/dpdk/spdk_pid58942 00:38:45.569 Removing: /var/run/dpdk/spdk_pid59160 00:38:45.569 Removing: /var/run/dpdk/spdk_pid59253 00:38:45.569 Removing: /var/run/dpdk/spdk_pid59293 00:38:45.569 Removing: /var/run/dpdk/spdk_pid59410 00:38:45.569 Removing: /var/run/dpdk/spdk_pid59428 00:38:45.569 Removing: /var/run/dpdk/spdk_pid59616 00:38:45.569 Removing: /var/run/dpdk/spdk_pid59708 00:38:45.569 Removing: /var/run/dpdk/spdk_pid59793 00:38:45.569 Removing: /var/run/dpdk/spdk_pid59904 00:38:45.569 Removing: /var/run/dpdk/spdk_pid59990 00:38:45.569 Removing: /var/run/dpdk/spdk_pid60029 00:38:45.569 Removing: /var/run/dpdk/spdk_pid60066 00:38:45.569 Removing: /var/run/dpdk/spdk_pid60136 00:38:45.569 Removing: /var/run/dpdk/spdk_pid60215 00:38:45.569 Removing: /var/run/dpdk/spdk_pid60651 00:38:45.569 Removing: /var/run/dpdk/spdk_pid60704 
00:38:45.569 Removing: /var/run/dpdk/spdk_pid60757
00:38:45.569 Removing: /var/run/dpdk/spdk_pid60772
00:38:45.569 Removing: /var/run/dpdk/spdk_pid60863
00:38:45.569 Removing: /var/run/dpdk/spdk_pid60879
00:38:45.569 Removing: /var/run/dpdk/spdk_pid60976
00:38:45.569 Removing: /var/run/dpdk/spdk_pid60986
00:38:45.569 Removing: /var/run/dpdk/spdk_pid61045
00:38:45.569 Removing: /var/run/dpdk/spdk_pid61063
00:38:45.569 Removing: /var/run/dpdk/spdk_pid61116
00:38:45.569 Removing: /var/run/dpdk/spdk_pid61128
00:38:45.569 Removing: /var/run/dpdk/spdk_pid61288
00:38:45.569 Removing: /var/run/dpdk/spdk_pid61319
00:38:45.569 Removing: /var/run/dpdk/spdk_pid61408
00:38:45.569 Removing: /var/run/dpdk/spdk_pid61580
00:38:45.569 Removing: /var/run/dpdk/spdk_pid61664
00:38:45.569 Removing: /var/run/dpdk/spdk_pid61701
00:38:45.569 Removing: /var/run/dpdk/spdk_pid62135
00:38:45.569 Removing: /var/run/dpdk/spdk_pid62233
00:38:45.569 Removing: /var/run/dpdk/spdk_pid62342
00:38:45.569 Removing: /var/run/dpdk/spdk_pid62397
00:38:45.569 Removing: /var/run/dpdk/spdk_pid62417
00:38:45.569 Removing: /var/run/dpdk/spdk_pid62501
00:38:45.569 Removing: /var/run/dpdk/spdk_pid63118
00:38:45.569 Removing: /var/run/dpdk/spdk_pid63155
00:38:45.569 Removing: /var/run/dpdk/spdk_pid63616
00:38:45.569 Removing: /var/run/dpdk/spdk_pid63714
00:38:45.569 Removing: /var/run/dpdk/spdk_pid63823
00:38:45.569 Removing: /var/run/dpdk/spdk_pid63876
00:38:45.569 Removing: /var/run/dpdk/spdk_pid63896
00:38:45.569 Removing: /var/run/dpdk/spdk_pid63927
00:38:45.569 Removing: /var/run/dpdk/spdk_pid65765
00:38:45.569 Removing: /var/run/dpdk/spdk_pid65902
00:38:45.569 Removing: /var/run/dpdk/spdk_pid65906
00:38:45.569 Removing: /var/run/dpdk/spdk_pid65918
00:38:45.569 Removing: /var/run/dpdk/spdk_pid65958
00:38:45.569 Removing: /var/run/dpdk/spdk_pid65962
00:38:45.569 Removing: /var/run/dpdk/spdk_pid65974
00:38:45.569 Removing: /var/run/dpdk/spdk_pid66019
00:38:45.569 Removing: /var/run/dpdk/spdk_pid66023
00:38:45.569 Removing: /var/run/dpdk/spdk_pid66035
00:38:45.569 Removing: /var/run/dpdk/spdk_pid66080
00:38:45.569 Removing: /var/run/dpdk/spdk_pid66084
00:38:45.569 Removing: /var/run/dpdk/spdk_pid66096
00:38:45.569 Removing: /var/run/dpdk/spdk_pid67487
00:38:45.569 Removing: /var/run/dpdk/spdk_pid67584
00:38:45.569 Removing: /var/run/dpdk/spdk_pid68992
00:38:45.569 Removing: /var/run/dpdk/spdk_pid70756
00:38:45.569 Removing: /var/run/dpdk/spdk_pid70824
00:38:45.569 Removing: /var/run/dpdk/spdk_pid70905
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71009
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71102
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71203
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71272
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71347
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71452
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71550
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71646
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71720
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71799
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71903
00:38:45.569 Removing: /var/run/dpdk/spdk_pid71995
00:38:45.569 Removing: /var/run/dpdk/spdk_pid72091
00:38:45.569 Removing: /var/run/dpdk/spdk_pid72165
00:38:45.569 Removing: /var/run/dpdk/spdk_pid72240
00:38:45.569 Removing: /var/run/dpdk/spdk_pid72350
00:38:45.569 Removing: /var/run/dpdk/spdk_pid72436
00:38:45.569 Removing: /var/run/dpdk/spdk_pid72532
00:38:45.569 Removing: /var/run/dpdk/spdk_pid72606
00:38:45.569 Removing: /var/run/dpdk/spdk_pid72680
00:38:45.569 Removing: /var/run/dpdk/spdk_pid72753
00:38:45.569 Removing: /var/run/dpdk/spdk_pid72829
00:38:45.569 Removing: /var/run/dpdk/spdk_pid72932
00:38:45.569 Removing: /var/run/dpdk/spdk_pid73023
00:38:45.569 Removing: /var/run/dpdk/spdk_pid73117
00:38:45.569 Removing: /var/run/dpdk/spdk_pid73186
00:38:45.569 Removing: /var/run/dpdk/spdk_pid73260
00:38:45.569 Removing: /var/run/dpdk/spdk_pid73340
00:38:45.569 Removing: /var/run/dpdk/spdk_pid73414
00:38:45.569 Removing: /var/run/dpdk/spdk_pid73512
00:38:45.569 Removing: /var/run/dpdk/spdk_pid73607
00:38:45.569 Removing: /var/run/dpdk/spdk_pid73752
00:38:45.569 Removing: /var/run/dpdk/spdk_pid74036
00:38:45.569 Removing: /var/run/dpdk/spdk_pid74068
00:38:45.569 Removing: /var/run/dpdk/spdk_pid74521
00:38:45.831 Removing: /var/run/dpdk/spdk_pid74709
00:38:45.831 Removing: /var/run/dpdk/spdk_pid74808
00:38:45.831 Removing: /var/run/dpdk/spdk_pid74918
00:38:45.831 Removing: /var/run/dpdk/spdk_pid74964
00:38:45.831 Removing: /var/run/dpdk/spdk_pid74991
00:38:45.831 Removing: /var/run/dpdk/spdk_pid75270
00:38:45.831 Removing: /var/run/dpdk/spdk_pid75331
00:38:45.831 Removing: /var/run/dpdk/spdk_pid75402
00:38:45.831 Removing: /var/run/dpdk/spdk_pid75810
00:38:45.831 Removing: /var/run/dpdk/spdk_pid75961
00:38:45.831 Removing: /var/run/dpdk/spdk_pid76766
00:38:45.831 Removing: /var/run/dpdk/spdk_pid76899
00:38:45.831 Removing: /var/run/dpdk/spdk_pid77061
00:38:45.831 Removing: /var/run/dpdk/spdk_pid77169
00:38:45.831 Removing: /var/run/dpdk/spdk_pid77455
00:38:45.831 Removing: /var/run/dpdk/spdk_pid77703
00:38:45.831 Removing: /var/run/dpdk/spdk_pid78049
00:38:45.831 Removing: /var/run/dpdk/spdk_pid78231
00:38:45.831 Removing: /var/run/dpdk/spdk_pid78408
00:38:45.831 Removing: /var/run/dpdk/spdk_pid78455
00:38:45.831 Removing: /var/run/dpdk/spdk_pid78721
00:38:45.831 Removing: /var/run/dpdk/spdk_pid78745
00:38:45.831 Removing: /var/run/dpdk/spdk_pid78792
00:38:45.831 Removing: /var/run/dpdk/spdk_pid79098
00:38:45.831 Removing: /var/run/dpdk/spdk_pid79323
00:38:45.831 Removing: /var/run/dpdk/spdk_pid80214
00:38:45.831 Removing: /var/run/dpdk/spdk_pid81170
00:38:45.831 Removing: /var/run/dpdk/spdk_pid82089
00:38:45.831 Removing: /var/run/dpdk/spdk_pid82645
00:38:45.831 Removing: /var/run/dpdk/spdk_pid82782
00:38:45.831 Removing: /var/run/dpdk/spdk_pid82863
00:38:45.831 Removing: /var/run/dpdk/spdk_pid83235
00:38:45.831 Removing: /var/run/dpdk/spdk_pid83293
00:38:45.831 Removing: /var/run/dpdk/spdk_pid84228
00:38:45.831 Removing: /var/run/dpdk/spdk_pid84858
00:38:45.831 Removing: /var/run/dpdk/spdk_pid85872
00:38:45.831 Removing: /var/run/dpdk/spdk_pid85987
00:38:45.831 Removing: /var/run/dpdk/spdk_pid86031
00:38:45.831 Removing: /var/run/dpdk/spdk_pid86084
00:38:45.831 Removing: /var/run/dpdk/spdk_pid86160
00:38:45.831 Removing: /var/run/dpdk/spdk_pid86216
00:38:45.831 Removing: /var/run/dpdk/spdk_pid86421
00:38:45.831 Removing: /var/run/dpdk/spdk_pid86501
00:38:45.831 Removing: /var/run/dpdk/spdk_pid86558
00:38:45.831 Removing: /var/run/dpdk/spdk_pid86625
00:38:45.831 Removing: /var/run/dpdk/spdk_pid86666
00:38:45.831 Removing: /var/run/dpdk/spdk_pid86728
00:38:45.831 Removing: /var/run/dpdk/spdk_pid86873
00:38:45.831 Removing: /var/run/dpdk/spdk_pid87094
00:38:45.831 Removing: /var/run/dpdk/spdk_pid87997
00:38:45.831 Removing: /var/run/dpdk/spdk_pid88911
00:38:45.831 Removing: /var/run/dpdk/spdk_pid89707
00:38:45.831 Removing: /var/run/dpdk/spdk_pid90380
00:38:45.831 Clean
12:50:52 -- common/autotest_common.sh@1453 -- # return 0
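For reference, the autotest_cleanup trace above boils down to sweeping DPDK's runtime state out of /var/run/dpdk: the primary-process directory spdk0 (its shared config plus the fbarray_memseg-*, fbarray_memzone and hugepage_info hugepage metadata) and one spdk_pid<N> file left behind by each SPDK process started during the run. A minimal Bash sketch of such a sweep follows; the real helper is autotest_cleanup in SPDK's common/autotest_common.sh, and the function name and glob-based loop here are illustrative assumptions rather than the actual implementation.

    # Illustrative sketch only: mirrors the "Cleaning"/"Removing:" lines above.
    # The real logic is autotest_cleanup in common/autotest_common.sh; this
    # function name and the glob-based loop are assumptions for illustration.
    cleanup_dpdk_runtime() {
        local f
        echo "Cleaning"
        for f in /var/run/dpdk/spdk0/* /var/run/dpdk/spdk0 /var/run/dpdk/spdk_pid*; do
            [[ -e $f ]] || continue   # skip globs that matched nothing
            echo "Removing: $f"
            rm -rf "$f"
        done
    }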
00:38:45.831 12:50:52 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
12:50:52 -- common/autotest_common.sh@732 -- # xtrace_disable
12:50:52 -- common/autotest_common.sh@10 -- # set +x
00:38:45.831 12:50:52 -- spdk/autotest.sh@391 -- # timing_exit autotest
12:50:52 -- common/autotest_common.sh@732 -- # xtrace_disable
12:50:52 -- common/autotest_common.sh@10 -- # set +x
00:38:46.093 12:50:52 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
12:50:52 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
12:50:52 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
12:50:52 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:38:46.093 12:50:52 -- spdk/autotest.sh@398 -- # hostname
00:38:46.093 12:50:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:38:46.093 geninfo: WARNING: invalid characters removed from testname!
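The six lcov invocations that follow post-process the coverage data: the pre-test baseline capture (cov_base.info) is merged with the post-test capture written above (cov_test.info), and bundled DPDK sources, system headers under /usr, and example/app code are then stripped from the combined report in turn. Condensed for readability, with $OUT standing in for /home/vagrant/spdk_repo/spdk/../output and the repeated --rc lcov_*/genhtml_*/geninfo_* flags elided, the sequence is equivalent to:

    OUT=/home/vagrant/spdk_repo/spdk/../output
    # Merge the pre-test baseline with the post-test capture.
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Strip out-of-tree and uninteresting paths from the merged report.
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"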
00:39:12.680 12:51:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:15.227 12:51:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:17.158 12:51:23 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:19.068 12:51:25 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:20.968 12:51:27 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:22.344 12:51:29 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:24.247 12:51:31 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
12:51:31 -- spdk/autorun.sh@1 -- $ timing_finish
12:51:31 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
12:51:31 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
12:51:31 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
12:51:31 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
+ [[ -n 5041 ]]
+ sudo kill 5041
00:39:24.258 [Pipeline] }
00:39:24.272 [Pipeline] // timeout
00:39:24.277 [Pipeline] }
00:39:24.289 [Pipeline] // stage
00:39:24.294 [Pipeline] }
00:39:24.305 [Pipeline] // catchError
00:39:24.314 [Pipeline] stage
00:39:24.316 [Pipeline] { (Stop VM)
00:39:24.328 [Pipeline] sh
00:39:24.613 + vagrant halt
00:39:27.163 ==> default: Halting domain...
00:39:33.763 [Pipeline] sh
00:39:34.089 + vagrant destroy -f
00:39:36.651 ==> default: Removing domain...
00:39:37.610 [Pipeline] sh
00:39:37.896 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:39:37.906 [Pipeline] }
00:39:37.923 [Pipeline] // stage
00:39:37.929 [Pipeline] }
00:39:37.944 [Pipeline] // dir
00:39:37.950 [Pipeline] }
00:39:37.967 [Pipeline] // wrap
00:39:37.973 [Pipeline] }
00:39:37.987 [Pipeline] // catchError
00:39:37.998 [Pipeline] stage
00:39:38.000 [Pipeline] { (Epilogue)
00:39:38.015 [Pipeline] sh
00:39:38.305 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:43.603 [Pipeline] catchError
00:39:43.605 [Pipeline] {
00:39:43.619 [Pipeline] sh
00:39:43.907 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:43.907 Artifacts sizes are good
00:39:43.918 [Pipeline] }
00:39:43.932 [Pipeline] // catchError
00:39:43.942 [Pipeline] archiveArtifacts
00:39:43.950 Archiving artifacts
00:39:44.050 [Pipeline] cleanWs
00:39:44.066 [WS-CLEANUP] Deleting project workspace...
00:39:44.066 [WS-CLEANUP] Deferred wipeout is used...
00:39:44.073 [WS-CLEANUP] done
00:39:44.075 [Pipeline] }
00:39:44.088 [Pipeline] // stage
00:39:44.094 [Pipeline] }
00:39:44.107 [Pipeline] // node
00:39:44.112 [Pipeline] End of Pipeline
00:39:44.154 Finished: SUCCESS
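One closing note on the timing_finish trace above: the timing_enter/timing_exit pairs seen throughout the run record per-step durations in timing.txt (made world-readable by the chmod a+r earlier), and timing_finish renders that file with FlameGraph's flamegraph.pl when the tool is installed, so the roughly 40 minutes of wall-clock time can be browsed step by step. A sketch of reproducing the rendering locally is below; the redirection to timing.svg is an assumption, since the log does not show where the SVG is written.

    # Re-render the build-timing flamegraph from this run's timing.txt.
    # Writing to timing.svg is an assumption; the log omits the output path.
    flamegraph=/usr/local/FlameGraph/flamegraph.pl
    if [[ -e timing.txt && -x $flamegraph ]]; then
        "$flamegraph" --title 'Build Timing' --nametype Step: \
            --countname seconds timing.txt > timing.svg
    fi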